Today was an interesting example of what happens when we are trying to be clear. Actually, it's an example of what happens when we think we are being clear. Sometimes, though, no matter how hard we try, we see that we missed somewhere, and the message was interpreted differently. That's today's episode of Weekend Testers Americas in a nutshell.
Some background. I was all set to present a challenge where we would use one tool, a test application called Rapid Reporter, and use it to test another web application. The goal would have been to test Rapid Reporter and evaluate how it worked. I had my doubts, though, and I voiced them to Michael Bolton, who has been a great supporter of our getting the Weekend Testing movement off the ground here in the Americas. When I described what I was planning, and my concerns that we were biting off more than we could chew, he agreed. It's tough to get a charter that would cover one hour and then allow for a debrief of one hour. That's just not a lot of time. So Michael suggested a different tactic: let's just focus on gathering information about Rapid Reporter, focus on its pluses and minuses, and use this as a prerequisite to a future testing session.
As we talked about this, we were referring both to the Rapid Reporter application itself and to the application we were going to test with it. Thus, I had set the expectation for myself that Rapid Reporter would be evaluated to see how effective a tool it would be for testing other applications (some of you are following this train of thought and thinking "oh boy, I know where this is going"... and you'd be right, but bear with me).
When we started the session, we decided that we would just evaluate Rapid Reporter, and look to make a mind map or feature list of its attributes. Here's the final mission statement:
Produce either a list or a mind map of the features of Rapid Reporter /with the goal of guiding a future session of testing/.
This wording made for two paths of thought with the participants:
1. A group of testers thought that this meant that we should evaluate Rapid Reporter's features with the goal of testing RAPID REPORTER in the future.
2. A group of testers thought that this meant that we should evaluate Rapid Reporter's features with the goal of testing another application while using Rapid Reporter in the process.
Who was correct? Who met the mission statement? Some vigorous discussion followed regarding this, and I concluded that both groups were right, based on the wording of the charter. Group one looked at it in one context, and Group two looked at it in a different context. Both came up with feature lists and plans that would help them with their next testing session; it's just that one team interpreted it to mean the next testing session would be to test Rapid Reporter itself, and the other group chose to test another application.
I did not plan this outcome or this discussion, and therein lies the real beauty of the process of Weekend Testing, and I'd dare say perhaps its most valuable teaching. In the short time we have, we need to do a lot, and we tend to learn a lot. In this case, the context of the testing came into play, and we considered different contexts of the testing and features we were evaluating, based on what we interpreted the end goal of the session to be.
As the facilitator, it's my job to keep the session on its track, but occasionally, there are comments and experiences that are just too interesting to close down or halt. Letting this discussion run its course helped a lot of people see a key principle. Words mean things, but they also mean things in different ways to different people, who will see them through their own filters and biases. That's not being critical or mean, that's stating a fact. It's up to us to ask for clarification, and it is also up to us who provide the charter to make sure there's no miscommunication (or in this case, to be unintentionally vague enough that a happy accident takes place).
A lot of learning, a lot of fresh understanding, and some great ideas for the next time we test... and quite a bit of good understanding and mind-mapping of Rapid Reporter under our belts as well, so that, should we test another application and the testers want to use Rapid Reporter as their tool for it, they will be in a good place to do exactly that :).
1 comment:
Thanks, Michael! That was a really great session!
My write-up is here: http://automation-beyond.com/2010/12/15/wta03-mission-impossible/
My report on behalf of Darren McMillan and myself.
Dear Client,
We thank you for the opportunity to work on this mission.
*Summary*
We think that employing Rapid Reporter in WT sessions will be of value for you and the participants.
In the timeframe given, we have not discovered any major threats to the value, or serious risks, associated with the tool.
*The outcomes*
With Rapid Reporter, WT participants can create experience reports on the fly, which are the valuable part of WT sessions.
As Rapid Reporter is a product which is specifically intended to support session-based exploratory testing (ET), using it encourages learning of ET and provides an opportunity to practice SBET.
*The product*
The tool has an intuitive interface and is very easy to use.
It is lightweight but has sufficiently powerful note-taking functionality.
The tool has certain minor defects, and there is room for enhancement.
Its system requirements are Windows / .NET 3.5.
*Dependencies*
Although Rapid Reporter requires minimal effort for installation, considering the tight timeline of WT sessions, we recommend instructing participants to set up the tool prior to the session and to spend some time getting familiar with it.
Sincerely,
Albert Gareev & Darren McMillan