Today we decided to try a flash game as our testing challenge. Albert Gareev first suggested the game "Lightbot" before Christmas, and I thought it had a lot of promise for a testing challenge. We looked through some options and decided we'd present it as a two-part challenge. The first part would give people who may not do much programming a chance to play with the game, see how it worked, and learn a bit about programming and the concepts that underpin programming systems. The second was to treat the game as though it were an actual robot simulation, with the robot performing tasks on a factory floor.
What was most interesting about this particular session was that my son, Nick, came by as I was setting up the game, and when he saw what it was, he asked if he could join the session, too. Since I run the session with two accounts (the Weekend Testers Americas account as moderator, and my own as a participant), I told him to go ahead and take over my account, and I'd use the moderator's account exclusively. Shmuel Gershon, one of our weekend testing regulars, asked Nick if he'd be willing to pair test with him, which I was fine with.
After an hour of experimenting with the game, we had a chance to talk about the methodologies we used, ways the system could be improved, and whether we would trust the system if it were a real robotics simulation with an actual robot as its subject. We all decided that, while the game was a great example for learning the basics of programming, it certainly was not sophisticated enough to be a true working robotics system. What we found was that programming was often best done in rapid iterations, with lots of testing and experimenting to see if the ideas would work (and work repeatedly)... sounds a lot like Agile, huh? In addition, we saw that a solution which worked well for one situation would not work well (or at all) for another. We had to change our set of instructions frequently based on the context of what was happening at that given time. We also had to take into consideration what the end users of the product would or would not accept in the designed functionality.
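To make that context point concrete, here's a minimal sketch in Python (my own illustration, not Lightbot's actual engine; the grids, command names, and `run` helper are all hypothetical) showing how a fixed command sequence can solve one board and fail on another:

```python
# A minimal sketch (not Lightbot's actual engine) illustrating why a command
# sequence that solves one board can fail on another: the same program is
# replayed against two hypothetical grids, and only one run lights every tile.

def run(program, grid, start=(0, 0), facing=(0, 1)):
    """Replay a fixed command list against a grid; return True if every
    'blue' tile (marked 'B') ends up lit."""
    rows, cols = len(grid), len(grid[0])
    pos, lit = start, set()
    for cmd in program:
        if cmd == "forward":
            nxt = (pos[0] + facing[0], pos[1] + facing[1])
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols:
                pos = nxt                      # move only if still on the board
        elif cmd == "left":
            facing = (-facing[1], facing[0])   # rotate 90 degrees counter-clockwise
        elif cmd == "right":
            facing = (facing[1], -facing[0])   # rotate 90 degrees clockwise
        elif cmd == "light" and grid[pos[0]][pos[1]] == "B":
            lit.add(pos)                       # lighting only works on blue tiles
    targets = {(r, c) for r in range(rows)
               for c in range(cols) if grid[r][c] == "B"}
    return lit == targets

# One program, two boards: it solves the first but not the second.
program = ["forward", "light", "forward", "light"]
board_a = [[".", "B", "B"]]          # both blue tiles lie straight ahead
board_b = [[".", "B", "."],
           [".", "B", "."]]          # the second blue tile needs a turn
print(run(program, board_a))  # True  -- the instructions fit this context
print(run(program, board_b))  # False -- same program, different context
```

Running it prints True for the first board and False for the second, which is exactly the lesson we kept bumping into: the program's correctness depends on the board it's asked to solve.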
Albert was the one who suggested this idea, and so I asked if he would be willing to help us run the session this week. He said he'd be game for that, and so he did. After the session, he and I discussed a few ideas about the sessions, and we both agreed there were a few things we could do to help them run more smoothly:
1. Make sure that we encourage testers to hang around for a little bit to make sure they understand the charter and mission (we've had questions about this very thing in the past, so it's certainly reasonable).
2. Asking questions related to the Mission is vital; otherwise, tangents can threaten to derail the session (again, this is very true, and it's one of the biggest challenges). Of course, the tangents are often really interesting and sometimes reveal great insights, so I don't want to discourage them entirely!
3. We often highlight certain sections in the chat session so that they stand out as key areas worth noticing. Usually the moderator does this, but often the participants do as well. This time, quite a few participants used the option to emphasize points and offer praise to other session members, which frankly I thought was really cool.
3A. Albert mentioned that having participants also head over to the bug database while the session is running may be too large a distraction and cost a lot of valuable time. As a possible alternative, we could "hash tag" the chat items that are meant to be bug entries in the system. This way it would be easy to see where in the chat someone is reporting a bug (see the sketch after this list).
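As a rough illustration of how that tagging could pay off afterward, here's a minimal Python sketch (the transcript filename, line format, and "#bug" tag are all my assumptions, not anything we actually built) that pulls tagged lines out of a saved chat log:

```python
# A minimal sketch of the "hash tag" idea: scan a saved chat transcript
# (the filename and "#bug" convention are assumptions) and pull out every
# line a participant tagged as a bug report.

import re

BUG_TAG = re.compile(r"#bug\b", re.IGNORECASE)

def extract_bug_reports(transcript_path):
    """Return (line number, text) for every chat line tagged #bug."""
    reports = []
    with open(transcript_path, encoding="utf-8") as chat:
        for lineno, line in enumerate(chat, start=1):
            if BUG_TAG.search(line):
                reports.append((lineno, line.strip()))
    return reports

# Example: print tagged lines so they can be copied into the bug database later.
for lineno, text in extract_bug_reports("session_transcript.txt"):
    print(f"{lineno}: {text}")
```

Nothing fancy, but it shows why a consistent tag matters: one pass over the transcript recovers every bug report, with its line number, without anyone leaving the chat during the session.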
Albert also suggested that there be more than one moderator/facilitator to help guide discussions. We talked about designating a few moderators each session and rotating them, so that our regular participants also get a chance to moderate discussions and act as "crowd control". He also made the point that this kind of forum management offers a window into test management from both the people and the process perspectives; specifically, learning not to micromanage other people is one of the biggest challenges for beginners.
Finally, I liked how Albert said that, by running these sessions, the Weekend Testing crew and I give people "level-up experience" they couldn't realistically earn in months of working. That's a pretty awesome statement and endorsement, and all I can say is: I hope we prove worthy of that level of praise!
Again, thanks to everyone who participated. For those who didn't, we hope to see you soon!