Thursday, July 25, 2013

Learn to Question: 99 Ways Workshop #11

The Software Testing Club recently put out an eBook called "99 Things You Can Do to Become a Better Tester". Some of the suggestions are quite general and vague. Others are remarkably specific.


My goal for the next few weeks is to take the "99 Things" book, see if I can put my own personal spin on each suggestion, and turn each one into a personal workshop.


Suggestion #11: Learn to Question. - Tony Bruce


At its most basic and fundamental level, software testing is the process of asking an application a question, receiving a reply, and, based on that reply, formulating additional questions. From the answers we receive, we determine whether aspects of the application are working as they should, or whether there are issues. That's all it is, and "that's all" is actually a galaxy of possibilities. We often lament that we don't get the answers we want. Do we stop to ask ourselves "are we asking the right questions to begin with?"
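To make that loop concrete, here is a minimal sketch in Python. The `parse_date` function is entirely hypothetical (my stand-in for an application under test, not anything from the book); the point is the shape of the conversation, not the specifics.

```python
from datetime import date

def parse_date(text):
    """Hypothetical application under test: turns 'YYYY-MM-DD' into a date."""
    year, month, day = (int(part) for part in text.split("-"))
    return date(year, month, day)

# Question 1: does the happy path behave?
print(parse_date("2013-07-25"))  # 2013-07-25 -- a reply, which invites follow-ups

# Question 2, formulated from that reply: what about input that is
# well-formed but impossible?
try:
    parse_date("2013-02-30")
except ValueError as reply:
    print(reply)  # "day is out of range for month" -- another reply,
                  # which in turn raises questions about error handling
```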


Workshop #11: Understand How We Reason, and the Value of a Close Read

All right, that's definitely too big a chunk to put into a blog post, but if we were to make a genuine effort towards "Learning to Question", I would have to start with these two ideas.

Critical Thinking is a slow process. It requires effort. It takes time. We need to resist snap judgments, and we need to determine a course of action from what we are presented with. Typically this starts with applying reasoning to understand our world and the environment that shapes it. To keep the discussion simple, we'll focus on two primary modes of reasoning: inductive and deductive.

Inductive reasoning is where we take specific facts, figures, and data points and, based on those values, try to come to a broader conclusion. Inductive reasoning allows for the possibility that a conclusion might not be right, even if all of the specific premises used to reach it are correct. Inductively derived conclusions are referred to as strong or weak, based on the probability of the conclusion being true.

By contrast, deductive reasoning works the opposite way. It starts with broad, general premises, applies them to a specific case, and if the premises used are true, then the conclusion, by definition, is also "true". Deductively derived arguments are rated on their validity and soundness. It is entirely possible for an argument to be "valid" (the conclusion follows from the premises) and yet not "sound" (one or more of the premises is actually false). When the logic itself doesn't hold, we call that a "fallacy".
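As a testing illustration of the two modes (my own sketch, not from the book), consider a hypothetical `is_even` check standing in for the system under test:

```python
def is_even(n):
    """Stand-in for the system under test (treated as a black box)."""
    return n % 2 == 0

# Inductive reasoning: specific data points -> a broader conclusion.
samples = [2, 4, 6, 100, 2048]
replies = [is_even(n) for n in samples]
print(replies)  # [True, True, True, True, True]
# Conclusion: "is_even returns True for every even number." Every
# premise (each observed reply) is true, yet the conclusion is only
# *strong*, not certain -- some untried input could still surprise us.

# Deductive reasoning: general premises -> a specific conclusion.
# Premise 1: is_even returns True exactly when n % 2 == 0 (the spec).
# Premise 2: 2048 % 2 == 0.
# Conclusion: is_even(2048) is True. If both premises hold, the
# conclusion *must* hold (valid); it is *sound* only if Premise 1
# is actually true of the implementation.
assert is_even(2048)
```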

So what do these skills have to do with asking questions? It's important to understand how we arrive at our premises and how we come to our conclusions. The premises we examine will help us determine the questions we need to ask. Too often, we deal with a paradigm of "here's a question - what is the right answer?" Testers have to work from the opposite perspective. Our product is the answer, perhaps millions of answers.


How do we shift to asking good questions? I think it's important to learn how to do a critical read, and to understand what is really needed for a given context.


I will forever remember an answer that I gave in the BBST Foundations class that I took a few years ago. I read the scenario, and I was off like a bolt of lightning. I wrote down what I thought was a brilliant answer. It was detailed, leaving nothing out… and I received across-the-board agreement that I "failed" the question. Why? Because the answer I gave didn't answer the call of the question. It answered what I wanted the question to be about, but the actual question that was asked was never answered. I didn't give the question a critical read to really understand what it was asking. Likewise, if we don't give a critical read to the materials that comprise the details of an application, a story, a tool, or a customer inquiry, we might impose our own biases and opinions on what we need to examine, and totally miss what really needs to be looked at.

One of the things I recommend to students in the BBST classes (based entirely on my own chagrin at blowing it so spectacularly) is to take the text that's presented (a spec, a story, a customer inquiry) and break it up so as to isolate and identify the questions (if explicit) or write out the questions (if implicit). From there, go back and look at the details to see if the inquiry makes sense. If I can get a clear idea of what we should be examining, I can craft questions that best address the criteria I am interested in. In the event of an inquiry from a customer, I can tease out the supporting statements and the fluff and see if I can isolate what the real question(s) might be.
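As a toy illustration of that break-it-up step (my own sketch, not a BBST artifact), a first pass can even be mechanized: flag sentences that are already questions, and flag "should"/"must" statements as implicit questions waiting to be written out.

```python
import re

def first_pass(text):
    """Toy first pass: split a spec/story/inquiry into sentences, then
    separate explicit questions from statements that imply questions."""
    sentences = [s.strip() for s in re.split(r"(?<=[.?!])\s+", text) if s.strip()]
    explicit = [s for s in sentences if s.endswith("?")]
    # Words like "should" or "must" usually hide a testable claim --
    # an implicit question we need to write out ourselves.
    implicit = [s for s in sentences
                if re.search(r"\b(should|must|shall)\b", s, re.I)
                and not s.endswith("?")]
    return explicit, implicit

story = ("The report must finish in under five seconds. "
         "What happens if the data set is empty? "
         "Users should see an error message for bad input.")
explicit, implicit = first_pass(story)
print("Explicit questions:", explicit)
print("Implicit (rewrite as questions):", implicit)
```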


Bottom Line:

Research and read about aspects of logic and reasoning. Read up on inductive and deductive reasoning. Practice using the skills mindfully so that you understand where, in real-world use, you are using one or the other. Grade your premises and your conclusions; see if they are strong or weak (if using inductive reasoning) and whether they are valid and/or sound (if using deductive reasoning). As you practice these methods, keep a record of the questions that you ask. Write them down or record yourself speaking, but capture the questions. As you work through subsequent examples, see how your questions evolve over time, and how well they guide you to answers or to other areas worthy of inquiry.


Additionally, get into the habit of finding the call of the questions you are asked. Separate out the non-essential aspects, and make sure that you are actually focusing on the real question, then work to answer it. Undoubtedly, in that process, you will develop more specific, deeper, and more probing questions. Repeat this process... forever.
