I realized with all the talk I did a couple of weeks ago about PNSQC, and all of the talks I witnessed and participated in, I didn't talk very much about my own talk and, specifically, the Poster Paper presentations I made. I have to admit, when I took on the challenge of giving a Poster Paper presentation at PNSQC, I was doing it more out of curiosity than anything else.
The topic in question was my presentation of "Balancing Acceptance Test Driven Development, GUI Automation, and Exploratory Testing". I'd presented this paper in a 20-minute format at CAST, with Q&A following. I'd presented this talk in an hour-long format, with Q&A following, at Agilistry, so I was really curious to see... could I deliver the same message, and get across the same essential points, with just 5 minutes of engagement, and could I do it multiple times?
The answer was very interesting, and not at all what I expected.
Instead of me delivering the same talk multiple times, each person or group of people would engage me in a different way. Each wanted to have a different discussion about the same material. What was also interesting was that the areas I thought were the most important still had some relevance, but an idea that I offered as a "take away", a kind of small "well, you could do this", became much more pronounced. The little takeaway, I realized, was the real kernel behind what I could do. That little kernel has, I think, the potential to be something much bigger, and frankly I'm interested in seeing it become something bigger.
What's that kernel? The idea that we need to take a different look at test automation. So often, we do A to B tests. We set up tests to run a number of scripted scenarios. We run multiple examples of essentially the same scripted tests. In my presentation, I likened this to a train. Sure, with randomization, you could switch the order of the containers and the order of the cars, but they're still behind an engine, and they are still taken, on the rails, from Point A to Point B. If we go along for the ride, the view rarely changes.
As part of my presentation, I offered an offhanded "takeaway": wouldn't it be interesting if, instead of treating our tests like a train, we treated them like a taxicab? What if taxicabs were our operational metaphor? Imagine making our tests so that they could take us to the deepest, darkest corners of the applications we test. Once we reach those mysterious back alleys, the cab lets us off... and now the fun begins. What is it like to work on a system where a user has just logged in 10,000 times? Suppose you had a way of filling in every single visible text field on a site, and each confirmation took you to some place you might never have considered?
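For a little flavor of what that first cab ride might look like, here's a minimal sketch, assuming Selenium's Python bindings; the URL and the fill value are placeholders I made up for illustration, not anything from a real project. The cab drives to a page, fills in every visible text field it can find, and then hands the wheel back to the human tester:

# A minimal "taxicab" sketch: drive to a page, fill every visible text
# field, then let the human tester get out and explore.
from selenium import webdriver
from selenium.webdriver.common.by import By

def taxicab_ride(url, fill_value="taxicab was here"):
    driver = webdriver.Chrome()
    try:
        driver.get(url)
        # Grab every text-like input and textarea currently on the page.
        fields = driver.find_elements(
            By.CSS_SELECTOR, "input[type='text'], input[type='search'], textarea")
        for field in fields:
            if field.is_displayed() and field.is_enabled():
                field.clear()
                field.send_keys(fill_value)
        # The cab stops here; the tester takes over and explores.
        input("Fields filled. Explore the page, then press Enter to finish...")
    finally:
        driver.quit()

if __name__ == "__main__":
    taxicab_ride("https://example.com/some-deep-corner")  # placeholder URL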
I thought the idea was interesting. The participants at PNSQC, each time I delivered the conversation, kept pushing me... tell me more about this taxicab! How could we use it? What would we have to do to set such a thing up? How could we develop such a setup? The truth is, I don't have an answer for that, at least, not yet... but I have to say I am itching to see how we could do it!
Had I just presented my talk, that taxicab element might still just be an interesting theoretical tidbit. Now, after so many discussions with so many different people, I realize this is something that could really change the way that test automation is both perceived and performed, and I'd love to be part of that reality. I often said I'd need a good reason to want to dig deeper into programming beyond necessity; it would need to be an authentic problem that fascinated me. I may have just found it!
Wednesday, October 31, 2012
Wednesday, October 24, 2012
TESTHEAD REDUX: Testing Training? What's That?
Wow, what a difference two and a half years makes!
When I first started my blog, this was one of my first entries (entry number 5, to be exact). It was my first firing salvo, aimed at my frustration as to the value of the testing training I had seen to date, most of which, even then, I was not really thrilled with. I should also note that this is the first entry that received a comment... and it was from Matt Heusser! Interesting to think that this would be my first "formal" meeting of Matt, and what has transpired in the 30 months since would be, well, amazing to discuss... but that's another story (possibly a book's worth of material, to tell the truth).
Having said all that, my point was, I felt it was time to come back and see what I said and what I believe today, now that I'm older and, supposedly, wiser... or more jaded and cynical. Sometimes the two can be hard to tell apart ;).
To be fair, the above statement isn't as common as it used to be. There are now many options regarding testing training and how to get it. However, up until just a few years ago, being a tester and getting actual training related to testing was not an easy endeavor. Training for programming? Yes. Training for systems administration? Definitely. Training for engineering principles? Just look at MIT or Caltech or any number of engineering schools. Still, you would have been hard pressed to find a school that had an actual course of study related to software testing (and for that matter, even a single course was not a widely available option).
Today's reality is that there are two types of testing training that exist, and they can vary in value. There is what I call "vocabulary training", where the goal is to cover a lot of ground at a fairly theoretical level, and then take an exam or series of exams to show how well you know the material. The other is what I call "experiential training", where you actually get your hands dirty and test, try things out, talk about your experiences, and have others review your progress. I currently help run and facilitate two versions of this today.
The first and most formal version is the Association for Software Testing's Black Box Software Testing classes. These combine both the theoretical and the experiential aspects, and they allow testers the chance to test their understanding and challenge other participants' understanding. So whereas, before, I would say there's nothing out there, today I would say that is absolutely not true.
The second source of "experiential learning" that I am involved with is Weekend Testing, and these events are almost exclusively experiential, done with a Skype connection and a group of willing and active participants. There are chapters around the world, but the India and Americas chapters seem to be the most active at this immediate point in time. Experience level varies dramatically, peer testing is an attribute, and each session is different from the one before. There's no need to feel "I don't know enough to participate." We treat it as a safe environment for everyone to come in, learn, practice their craft, and make connections and share ideas. It's also a lot of fun.
Show of hands, how many testers had any genuinely formal training in how to do their job short of perhaps a quick orientation from a test team lead or test manager? How many people actually participated in training beyond this introductory level? If my experience is indicative of the majority of testers, I’m willing to bet that number is probably very low.
Today, while there are many who have sought out formal training, the vast majority of testers that I talk to still have this issue. Things are changing, in that I think the Meet-up culture is starting to encourage people to get out and talk about these things and offer alternatives to official training activities.
Testing Blogs: If you look to the side of these posts, you will notice that I have a roll of a number of test blogs. There are hundreds, but these are the ones I keep returning to again and again. I return to them because they challenge the way that I think about things, or they provide me with solid information and ideas to explore. There are hundreds more out there, and to be totally honest, I’d love to have TESTHEAD fill that role for people as well (it’s going to take awhile to develop the credibility so that it will be worth that to someone, but hey, I aim to try :) ).
This has become my number one source for information. Several sites aggregate just testers' blogs, so those sites are my first stop if I am really interested in seeing what testers have to say and what ideas are being discussed. What's interesting to see is who has changed in my blog roll since I started this. That's to be expected; bloggers come and go based on their energy levels and what they are involved in. What is cool is to see how many are still there two and a half years later that were on my original list :).
Testing Magazines and Communities: There are several magazines and groups that publish information that is readily available to everyone, and while they still require a subscription fee to get everything they offer, the amount of information they offer to the general public for free is substantial. Some of my personal favorites are Software Test and Performance magazine and their online collaboration site and Better Software magazine and the folks at Sticky Minds.
Still mostly true, but I have to add two more well-deserving groups. The first is the Association for Software Testing (AST). I didn't even know they existed when I first started my blog, but I've certainly gotten to know a lot more about them since then! Helping deliver, maintain and develop their educational offerings has been an eye-opening experience, and it has helped me interact with a bunch of terrific people, many of whom have become good friends since. The second is the Software Testing Club and "The Testing Planet". This loose-knit confederation of testers has some great discussions, and I always look forward to when the next issue of TTP is available.
Webinars and Online Training Sessions: Both STP and Better Software listed above host a series of webinars that cover different areas of software testing on a regular basis. Most of the sessions are free and open to the public, and many of the sessions are "tool agnostic", meaning that they talk about principles and practices that can be used with any of the common test tools, or applied to home-grown solutions.
Since this was written, I've discovered lots of areas where we can get more information and see live screencasts and pre-recorded webinars of topics useful to testers. Add to that the phenomenon of entire courses of study being made available online. Think of Coursera, Khan Academy, and other initiatives that have become well known that are making it possible for anyone willing to invest the time and energy to learn about any topic. I've also found myself branching into more software development discussions and groups, even though my primary focus is not programming. Codecademy and NetTuts+ are two great sites for this type of interaction.
Books: These were the titles that made my short list two and a half years ago:
- “Testing Computer Software” by Kaner, Falk and Nguyen
- “Effective Methods of Software Testing” by William Perry
- “Linchpin” by Seth Godin
- “Secrets of a Buccaneer Scholar” by James Bach
- “Software Testing: Fundamental Principles and Essential Knowledge” by James McCaffrey
- “Software Test Automation” by Mark Fewster and Dorothy Graham
- “Surviving the Top Ten Challenges of Software Testing” by William Perry and Randy Rice
What titles do I consider essential today? While many of these are still valid, I've found that I'm turning to different resources now. "Testing Computer Software" has been replaced by "Lessons Learned in Software Testing" (Kaner, Bach & Pettichord) as my backbone "go to" testing book. In addition, Gerald Weinberg's "Perfect Software and Other Illusions About Testing" has become a perennial favorite. I will still recommend "Linchpin" and "Secrets of a Buccaneer Scholar", and add "Explore It" by Elisabeth Hendrickson.
Interestingly, the books that have helped my testing the most have been, shall we say, not testing books at all, but rather those that focus on philosophy and inquiry, and help us see things in a different way, or at least understand how we have come to see different things over time. I'll also give a plug for my favorite "book discovery" of them all, in that it's not really a book, but a companion volume to a television series: "The Day the Universe Changed" by James Burke.
Wikipedia’s Software Testing Portal: [...] The Wikipedia software testing portal is an example of where this vast resource of people and small contributions comes together to make for a very large repository of information related to testing. Note: a phrase I famously use among friends and colleagues is “Caveat Lektor Wiki”, and this is no exception. Using Wikipedia as a sole resource for anything is never a good idea, but to get started and develop some basic ideas and understanding, it’s a great tool, and again, will provide many jumping off points if you wish to explore them.
Having had a chance to see many discussions about the content and the accuracy of the information, I will now say that "Caveat Lektor Wiki" still applies, but yes, much of the information can stand as an introduction to ideas that testers may not be as familiar with. Having said that, I also think that we as a community have the responsibility to review the information, and if we see it is in error, challenge it or add our own voices to explain why. It's our collection of experiences that makes that repository, so if you find something is in error or is badly worded, do your part to help make the explanations better (said with a healthy dose of "Physician, heal thyself!", I might add).
Just like two years ago, I still agree that knowledge begets knowledge, training begets training, and opportunity begets opportunity. Also, as Matt mentioned in the comment to my original post, if you can't get to an official training opportunity, band together with other testers in your area and make your own. Hold a local peer conference for a day, have a writing workshop on testing ideas, host your own local bug party. Whatever it takes, there's rich ore to be mined out there; it just requires that we pick up a shovel and dig.
Thursday, October 18, 2012
It's Worth The Blood Loss! Sidereel Recruiting Video
This post is just for the fun of it today :)!
Sidereel is hiring. Specifically, we are looking for Full Stack Rails Developers, Front End Developers, DevOps people, and also content curation specialists (a Sidereel Content Editor, basically).
There are two faces to what we do. First is the Sidereel.com website itself, which is where you can search for television shows, watch episodes, and track what you have watched so you can be alerted when new episodes and shows appear. Making sure that new features and the everyday functionality of the site works, and trying to find creative ways to discover areas that need improvement, that's my job :)!
Another aspect of Sidereel is the original content that we produce. In addition to having it embedded in our primary site, we also have an aggregation of our original content on Celebified.com. This contains news, gossip, previews, personal takes, interviews, and spoof/parody content. Some of our parody content in the past has included a Fast Fave music video ("Trunk of My Soul"), a sock puppet version of Jersey Shore called, appropriately enough, Jersey Sox, and a parody of The Office meets True Blood called "Fang in there Bro".
It was the unexpected success of the "FitB" series that made us decide to make our call out to the development community that we're hiring in a more, well, unique way. The net result is our short film "Working at SideReel: Definitely Worth the Blood Loss". We think it's funny, and we had fun making it, so here, for posterity's sake, is Sidereel's "Vampair" approach to Agile and programming. Enjoy!!!
Shadow Boxing: Round 1
Today is the first day of an interesting experiment. Well, it's the first day of the culmination of a month's worth of build-up, follow-up, and follow-follow-up.
This morning, I took what I hope will be the first step in changing the dynamic of the way my brain and I work. This is day one on Concerta.
I'm starting out with 36mg to see how it affects me. It's too early to tell, but I took the medication at 5:00 a.m. so I could be in a spot to comment on how it feels when I post this. I have to say, it's strangely... non-intrusive. I don't feel much different, except for one thing. I feel less inclined to multi-task. Again, I have no idea if this is projection or wishful thinking, but I feel less "scattered" and more willing to just deal with things.
My dad and I talked about this yesterday, and he said it's highly likely I'll have one of three experiences:
1. I will likely feel a little more energetic, a bit more focused, and probably feel overall better about myself.
2. I will feel jittery and aggro, and probably not enjoy the experience very much.
3. I may not notice much of a difference at all.
Again, it's way too soon to tell, I'll probably need a month's worth of examination on my part and a follow-up with the psychiatrist to know how well we are really doing.
Overall I have to be prepared for a few things: this is not a silver bullet, and I cannot expect some magical cure. Also, the world has changed dramatically since I was a kid. Being a person with ADHD isn't such a stigma any longer, and coming to grips with issues and actually doing something about them tends to be seen more favorably than pretending issues don't exist and wishing they'd just go away on their own accord. As my psychiatrist reminded me, ADHD doesn't go away. You really don't grow out of it. On the bright side, it can be treated, and at least from my initial reactions, Concerta doesn't feel like it will be all that bad. Time will tell, I guess :).
Saturday, October 13, 2012
Back to Reality, and Lessons Learned
Whew! That was a seriously busy five days. From early Sunday morning (being dropped off at the airport at 6:00 AM) until Thursday night around midnight (when I finally landed and got picked up at the airport), for all practical purposes, I had the experience of eating, drinking, breathing and sleeping nothing but software quality talk.
I live blogged four of those days, and a lot of that still needs to be cleaned up, but some interesting things came out of those sessions and conversations, many of them totally serendipitous and unrelated to any of the actual sessions.
What started as a simple paper proposal a number of months ago has now become an all-encompassing goal of mine to pursue and discover more about. A sideline topic and little takeaway that I had offered as a cool by-product turns out, after multiple discussions, to be something that could be a huge change in the way that testers actually approach automation and the benefits we can derive from it. A poster paper that was used to help me make the main points of how ATDD, GUI Automation and Exploratory Testing fit together now hangs (I believe) in an area at IBM, where one of the conference participants said that those ideas were really powerful, and would greatly inspire her team.
These are the factors about conferences that never really make their way into experience reports, or the talks that people have afterwards. Some of the best takeaways I had this week came during my poster paper interactions, or during side conversations after the fact or between sessions. They were ideas discussed at lunch, or while finding a Fijian food truck up near Powell's Books. Those are the thoughts and comments that have been really resonating with me.
In many ways, we don't go to these events and get our world rewired in one fell swoop. More often, we get a fresh appreciation for something we are already doing, or we discover something we thought we were doing but, on further reflection, realize we were not, or at least not as well as we could be. Most important for me was the fact that I had a chance to connect with my tribe, the tribe of professional software testers, and do so in an environment that is not just us, or where I'm the odd man out at a specific technology conference. PNSQC draws many people interested in software quality together. Many are testers, many are developers, and the nice thing is that this environment is much less "us vs. them" and more of "how can we all help each other do better work and make better software?"
For this, and for many other things, I thank you, Portland, and all who came to PNSQC and ATONW. Thank you for your thoughts, your insights, your humor, your horror stories, and the gentle reminder that, while for many things it's not just me, for a few things, it is just me! The good news: for those areas where it is just me, I've learned a number of new ways to deal with them. That spells out a successful week in my mind.
Thursday, October 11, 2012
#ATONW: Agile Open Testing Northwest, Live from Portland
OK, I admit it, I thought I was going to have a travel day and take it easy, but amazingly, it turns out that a local group in Portland decided to piggy-back onto PNSQC and have a dedicated Agile Testing day. Thus, here I am, hanging out with mostly people from Portland and thereabouts, with Matt, Ben, and Doc also in tow, for what looked like too good an opportunity to pass up.
Agile Open Testing Northwest, an open space conference, is happening right now. The theme is "Agile Testing: How Are WE Doing It?" For those not familiar with open space, it has five principles:
Whoever shows up are the right people.
Whenever it starts, it starts.
Whenever it's over, it's over.
Wherever it happens is the right place.
Whatever happens is the only thing that could.
That chaotic aspect totally appeals to me. What's really cool is, I have no idea where this is going to go, and what will be covered. If you want to come along for the ride, I welcome you to :).
---
Matt Heusser and Jane Hein joined forces to talk about "Refactoring Regression", or more specifically, how can we make regression testing more effective, less costly, and less time-consuming? A good group of contributors are sharing some of their own challenges, especially with legacy development being converted to Agile. Some of the ideas we discussed and considered were:
Have our sprints include a "regress" phase, meaning a sprint that's heavy on regression testing to ensure that we are covering our bases (and potentially automating as much as we realistically can, so that we limit our "eyeball essential tests" to the most critical steps).
Utilize Session-Based Test Management to capture and see what we are really covering.
Fold as much of our regression testing as we can into our Continuous Integration process. Even if it adds time to our builds, it may ultimately save us time by leaving only the most critical human-interaction tests to be run manually as part of regression.
Ultimately, the key nugget to this is that we learn and adapt based on what we learned. During stabilization, are there any patterns that we can see that will help us with further development and testing? Where can we consolidate? Can we use tag patterns to help us divide and conquer? Interesting stuff, but we gotta move on...
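As a quick aside on what "tag patterns" could look like in practice, here's a minimal sketch, assuming pytest as the test runner; the markers, test names, and login() stub are mine for illustration, not anything prescribed in the session. The idea is that CI runs the critical slice on every build and the full tagged regression set on a slower cadence:

# Sketch: slicing a regression suite with tags (pytest markers).
# The "critical" slice runs on every CI build; the full regression
# set runs nightly. login() is a stand-in for real application code.
import pytest

def login(username, password):
    # Stand-in for the real system under test.
    return username == "user" and password == "right-password"

@pytest.mark.critical
def test_login_with_valid_credentials():
    assert login("user", "right-password") is True

@pytest.mark.regression
def test_login_rejects_bad_password():
    assert login("user", "wrong-password") is False

# pytest.ini would register the markers so pytest doesn't warn:
#   [pytest]
#   markers =
#       critical: must pass on every build
#       regression: part of the full nightly regression run
#
# CI then picks a slice by tag:
#   pytest -m critical                   # every build
#   pytest -m "critical or regression"   # nightly full regression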
---
Our next session, facilitated by Michelle Hochstetter, is "Bridging the Gap Between Dev and QA", looking at what is going on in our organizations to help each side of the equation level up, get more involved, and be more effective. Many development teams profess to be Agile, but when it comes to testing (outside of unit testing, mocks, stubs, TDD, ATDD, automated GUI testing, etc.), it's rare that the developers get involved that early. Often, what we see is a Scrum development team and a hand-off to testing after the story has been mostly completed. In the Scrum world, this is often called "Scrummerfall". When this happens, the test roles and the dev roles are often isolated from one another. How can we prevent that? Or if we can't entirely prevent that, how can we effectively minimize it?
One approach suggested was to have developers become effective and savvy with software testing skills. Likewise, put effort and emphasis into helping the software testers boost their development skills. Another idea was moving to a Kanban-style system where the team has one-piece flow (just one story at a time). Another approach we can use is to pair dev and test. That can sometimes face challenges with status, role, and who can do what. We sometimes get into a situation where we "protect our fiefdoms". One word: stop protecting them (OK, that's three words). Developers are perfectly capable of doing quality testing; they may just not have the vocabulary or the experience with the skills. Same with testers. Testers can very often code. They may not be full stack developers, but they often understand the basics and beyond, and can do effective work and appreciate development and design skills. Leverage them. What's more, have the humility to recognize and appreciate the differences, and do the homework necessary to get to a point where respect can be earned.
An interesting question... how could we testers be of better value and use to our programmers? One area that was discussed was the idea that testers have to do a lot of useless testing (useless being a deliberately loaded term here; consider it repetitive, overdone testing on areas that programmers may not consider important or relevant). If the programmers have a lot of knowledge of the areas of the architecture (where on the blueprint did you make changes; if you were in room C, does it make sense for me to look at the door between room B and C?), then they can help us as testers understand where and how those changes are relevant. Additionally, give our programmers some of our exploratory testing tools and show them how to use them. Consider pairing developers and testers for both testing areas and programming maintenance. Encourage software testers to dive into code. Don't be afraid of developing bias. More information is more information. You may not use it all the time, but having it and knowing where to look and how things work can be very important and relevant.
---
Uriah McKinney and I led a combined session on "Building Teams: Will it Blend?" in which we looked at areas where we can help build solid Agile development and testing teams, and specifically the blend of skills where we can be as effective as possible. Very often, we make snap judgements about people based on very little information. Sometimes that snap judgement is very favorable, but it hides details that could be seen as detrimental. In other cases, the snap judgement could be very negative, but further reflection shows a tremendous number of strengths in other areas. Considering the ways that we interview and hire, different organizations face unique pressures. There's a difference between "filling a req" and making sure that you make a good cultural fit for a team.
One thing that we often need to consider is that junior team members, especially, may not have the jargon bingo down. Being able to explain Session-Based Test Management may be of value to a team, or it may be more important to have someone who is adaptable and can be creative on the fly in a short amount of time. Don't penalize them for not knowing the term; engage them and see if they actually do what you are after. You may be surprised at how well they respond. We discussed this very thing with college interns.
We discussed misleading job titles and requirements. There's a frustration with a company advertising that they are looking for testers, when in reality they are looking for programmers whose goal is to write and create testing frameworks and supporting tests. Let's be honest with what we are looking for, and let's call them what they are. Don't say you are looking for a quality assurance engineer when you are looking for a programmer who writes test frameworks.
When we hire our people, it's also important to have the team members work in the way that will be most effective for the team. That requires a level of communication, and that communication means the team's mores and values need to be clearly understood and respected, and that respect needs to run through for all. You can't play with different rules for your development team as compared to UX, Graphic Design, Product Management, or testing. If the team has shared values, work to ensure everyone is consistent in practicing and sharing those values.
---
Michael "Doc" Norton led a discussion about "Managing Automated Tests" and ways that we can get a handle on what can often seem like an out-of-control Leviathan. One of the main issues that makes this challenging is that we tend to focus on running "everything". We have a zillion tests and they all have to be run, which means they take time to run, and the more frequently we need to run the full suite, the more time we consume. Thus, it's critical that we try to find ways to get a handle on which tests we run, when we run them, and under what external conditions. If the tests all run fantastically, then we are OK, minus the amount of time. If something breaks, then we have to figure out what and where. If it's a small, isolated area, that won't take very long. If we break a large-scale, laddered test with lots of configuration options and dependencies, it gets harder to determine the issue and how to fix it.
One of the bigger issues is the story tests that, over time, get moved into more generalized areas. Over time, these generalized tests keep growing and growing, and ultimately the tests either reach the level of long-term regression test cases, or they become stale and the links break and become unusable. Fragile integration tests tend to get more and more flaky over time, so rather than waiting until they become a large mess, take some time to make sure that you prune your tests from time to time.
Many tests can be streamlined and reused by setting them up to be data driven. With data driven tests, only the data values and expected output (or other parameters) need to be applied. You can keep the rest of the test details the same, and only the data changes (it won't work for everything, but you would be surprised how many areas it does work).
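To make the data-driven idea concrete, here's a small sketch, assuming pytest as the runner; the apply_discount() function and the data rows are invented for illustration. The test body stays the same, and only the rows of data change:

# Data-driven test sketch: one test body, many data rows. Only the
# inputs and expected outputs vary; the test logic never changes.
import pytest

def apply_discount(price, percent):
    # Stand-in for real application code.
    return price * (1 - percent / 100.0)

@pytest.mark.parametrize("price, percent, expected", [
    (100.00,  0, 100.00),
    (100.00, 10,  90.00),
    ( 19.98, 50,   9.99),
    (  0.00, 25,   0.00),
])
def test_apply_discount(price, percent, expected):
    assert apply_discount(price, percent) == pytest.approx(expected)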
Another danger is to have tests that are treated like a religious ceremony. This is especially tricky when it comes to suites that run repeatedly and almost never find anything. Perhaps one thing to consider is that some of those tests are not so relevant. Consider consolidating those tests or even moving them out to another area.
Automated tests are code too, and they need to be treated like code. Being a quality analyst and being a programmer start to blend together at this point. Often, this can get some funny reactions, ranging from developers and testers thinking the other side has it easy. Coders think that testers have an easy job, and if they just learned it, they'd be able to do it, too. Testers often think the same about programmers. Here's the thing: it took each of us close to 15 years to get good at our respective efforts, so a little respect is in order :).
---
The last session for the day was Janet Lunde's "Balancing Between Dev & QA on Automated Tests" (gee, can you tell where both my personal energy and my anxieties are today ;)?). Who owns the tests that are made for automation? Does the programming team? Does the testing team? Should there be a balance between them? Dale Emory said that he sees a lot of institutionalized "laissez faire" in the way that tests are implemented within many organizations. Often, tests are seen as just those things that testers run, and then there's some numbers and a "red light/green light" and we go on our way.
What if we could reframe these arguments? Who is the audience for the automated tests? What's the ultimate value of running them? Often, having the testers write the tests proves to be a bottleneck due to experience level with coding. Ben Simo shared an example like this, and how the development team was asked to run a number of the tests on the testers' framework and critique/offer suggestions. With that, the development team started using the framework, and started writing tests on the framework. This allowed a lot of bottlenecks to be cleared. More to the point, it allowed the developers to better understand the testers' point of reference. This helped flesh out their tools and their knowledge, so that the testers, later, would be able to take over the automation. At that point, they were in a much more stable place to write more tests more quickly.
In talking about the balance of automation, it has been amusing to see how much automation and how much manual testing is done. A lot of manual testing isn't just desirable, it's necessary. From my own paper: automation is great at parsing data, piping inputs to outputs, looking for regular expressions and creating logs. What it can't do is create curiosity, make a real sapient decision, and "look" at the software. If an automated test is looking for elements to appear on the screen, it will tell you the elements are there, but it won't tell you whether the CSS rules loaded unless it's programmed to check. That file could be missing, and the raw HTML would display without styling, and the tests would still pass. The machine would miss the error; human eyes would not. Another great comment... "automation doesn't get frustrated". What that means is "automation doesn't get irritated at response time or how much data needs to be entered." A real human will voice displeasure or frustration, and that's a valid piece of test data.
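For what it's worth, the "CSS never loaded" case is checkable if someone thinks to program it. Here's a rough sketch, assuming Selenium's Python bindings; the URL and the expected font value are made-up examples, not from any real project. The point is to ask the browser whether styling actually took effect, not just whether the elements exist:

# Sketch: verify that styling actually applied, not just that the
# page's elements exist.
from selenium import webdriver

def check_styles_applied(url, expected_font_fragment="Helvetica"):
    driver = webdriver.Chrome()
    try:
        driver.get(url)
        # How many stylesheets did the browser actually load and parse?
        sheet_count = driver.execute_script("return document.styleSheets.length;")
        # What font does the body really render with?
        body_font = driver.execute_script(
            "return window.getComputedStyle(document.body).fontFamily;")
        assert sheet_count > 0, "No stylesheets loaded at all"
        assert expected_font_fragment in body_font, (
            "Page rendered, but styling looks like browser defaults: " + body_font)
    finally:
        driver.quit()

if __name__ == "__main__":
    check_styles_applied("https://example.com")  # placeholder URL and font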
The key takeaway from the group was that "automated testing is a necessary item, but it does not replace real-world, active, dynamic testing". While we can take out a lot of the mind-numbing, repetitive stuff with automated testing, we can also use it to bring us to interesting places we might never have considered (see my presentation and my comments about taxicab automation).
---
And with that, I think I'm done for the day. I have to get ready to close out this great conference day, take the Met to the Portland Airport, and impatiently wait to get home to see my family :). Thanks for spending the last several days with me. I've learned a lot. I hope you have learned something through me, too :).
Agile Open Testing Northwest, an open space conference, is happening right now. The theme is "Agile Testing: How Are WE Doing It?" For those not familiar with open space, it has five principles:
Whoever shows up are the right people.
Whenever it starts, it starts.
Whenever it's over, it's over.
Wherever it happens is the right place.
Whatever happens is the only thing that could.
That chaotic aspect totally appeals to me. What's really cool is, I have no idea where this is going to go, and what will be covered. If you want to come along for the ride, I welcome you to :).
---
Matt Heusser and Jane Hein joined forces to talk about "Refactoring Regression" or more specifically, how can we work to make regression more effective, less costly, less time consuming and more effective? A god group of contributors are sharing some of their own challenges, especially with Legacy development being converted to Agile. Some of the ideas we discussed and considered were:
To have our Sprints include "Regress" meaning to have a sprint that's heavy on Regression testing to ensure that we are covering our bases (and potentially automating as much as we realistically can so we limit our "eyeball essential tests" to those that our most critical steps are done.
Utilize Session Base Test Management to capture and see what we are really covering.
Co-Opt as much of our regression testing into our Continuous Integration Process. even if it adds time to our builds, it may ultimately save us time by leaving just the most critical human interacting tests to those that we need to manually run to check via regression.
Ultimately, the key nugget to this is that we learn and adapt based on what we learned. During stabilization, are there any patterns that we can see that will help us with further development and testing? Where can we consolidate? Can we use tag patterns to help us divide and conquer. Interesting stuff, but we gotta move on...
---
Our next session, facilitated by Michelle Hochstetter, is "Bridging the Gap Between Dev and QA" and looking to see what is going on in our organizations to help make it possible to help each side of the equation level up and get more involved and be more effective. Many development teams profess to be Agile, but when it comes to testing (outside of unit testing, mocks, stubs, TDD, ATDD, Automated GUI Testing, etc), it's rare that the developers get involved that early. Often, what we see is a Scrum development team and a hand off to testing after the story has been mostly completed. In the Scrum world, this is often called "Scrummerfall". When this happens, then the test roles and the Dev roles are often isolated from one another. How can we prevent that? Or if we can't entirely prevent that, how can we effectively minimize it?
One approach suggested was to have developers be effective and savvy with software testing skills. Likewise, putting effort and emphasis so that the software testers could boost their development skills. Moving to a Kanban style system where the team had one piece flow (just one story at a time). Another aspect that we can use is to pair dev-test. That can sometimes face challenges with status, role, and who can do what. We sometimes get into a situation where we "protect our fiefdoms". One word, stop protecting them (OK, that's three words). Developers are perfectly capable in doing quality testing, they may just not have the vocabulary or the experience with the skills. Same way with testers. testers can very often code. They may not be full stack developers, but they often understand the basics and beyond, and can do effective work and appreciate development and design skills. Leverage them. What's more, have the humility to recognize and appreciate the differences, and do the homework necessary to get to a point where respect can be earned.
An interesting question... how could us testers be of better value and use to our programmers? One area that was discussed was the idea that the testers have to do a lot of useless testing (useless being a loaded term here deliberately; consider it repetitive, overdone testing on areas that programmers may not consider important or relevant). If the programmers have a lot of knowledge of the areas of the architecture (where on the blue print did you make changes; if you were in room C, does it make sense for me to look at the door between room B and C?), then help us as testers understand where and how those changes are relevant. Additionally, giving our programmers some of our exploratory testing tools and how to use them. Consider pairing developers and testers for both testing areas and programming maintenance. Encourage software testers to dive into code. Don't be afraid of developing bias. More information is more information. You may not use it all the time, but having it and knowing where to look and how they work can be very important and relevant.
---
Uriah McKinney and I led a combined session on "Building Teams: Will it Blend?" in which we look at areas where can help build solid Agile development and testing teams, and specifically the blend of skills where we can be as effective as possible. very often, we make snap judgements about people based on very little information. Sometimes that snap judgement is very favorable, but it limits the details that could be seen as detrimental. In other cases, the snap judgement could be very negative, but upon further reflection, shows a tremendous number of strength in other areas. Considering the ways that we interview and hire, there are unique pressures to different organizations. there's a difference between "filling a req" and making sure that you make a good cultural fit for a team.
Some things that we often need to consider is that, especially for junior team members, they may not have the jargon bingo down. Explaining Session Based Test Management may be a value for a team, or it may be more important to have someone who id adaptable and can be creative on the fly in a short amount of time. Don't penalize them for not knowing the term, engage them and see if they actually do what you are after. You may be surprised at how well they respond. We discussed this very thing with college interns.
We discussed misleading job titles and requirements. There's a frustration with a company advertising that they are looking for testers, when in reality they are looking for programmers whose goal is to write and create testing frameworks and supporting tests. Let's be honest with what we are looking for, and let's call them what they are. Don't say you are looking for a quality assurance engineer when you are looking for a programmer who writes test frameworks.
When we hire our people, it's also important to have the team members work in the way that will be most effective for the team. That requires a level of communication, and that communication means a clear understanding of the team mores and values needs to be respected, and that respect needs to run through for all. You can't play with different rules for your development team as compared to UX, graphic Design, Product Management, or testing. If the team has shared values, work to encourage everyone is consistent with practicing and sharing those values.
---
Michael "Doc" Norton led a discussion about "Managing Automated Tests" and ways that we can get a handle of what can often seem like an out of control Leviathan. One of the main issues that makes this challenging is that we tend to focus on running "everything". We have a zillion tests and they a;l have to be run, which means they have to take the time to be run, and the more frequently we need to run the full suite, the more time we consume. Thus, it's critical that we try to find ways to get a handle on which tests we run, when we run them, and under what external conditions. If the tests all run fantastically, then we are OK minus the amount of time. If something breaks, then we have to figure out what and where. If it's a small isolated area, that won't take very long. If we break a large scale, laddered test with lots of configuration options and dependencies, that gets harder to determine the issue and how to fix the issue.
One of the bigger issues is the story tents that, over time, get moved into more generalized areas. Over time, these generalized tests keep growing and growing, and ultimately, test either reach a level of long term regression test case, or they become stake and the links break and become unusable. Fragile integration tests tend to get more and more flaky over time, so rather than waiting until they become a large mess, take some time to make sure that you prune your tests from time to time.
Many tests can be streamlined and reused by setting them up to be data driven. With data driven tests, only the data values and expected output (or other parameters) need to be applied. You can keep the rest of the test details the same, and only the data changes (it won't work for everything, but you would be surprised how many areas it does work).
Another danger is to have tests that are treated like a religious ceremony. This is especially tricky when it comes to suites that run repeatedly and almost never find anything. Perhaps one thing to consider is that some of those tests are not so relevant. Consider consolidating those tests or even moving them out to another area.
Automated tests are ode too, they need to be treated like code. Being a Quality Analyst and being a programmer starts to blend together in this point. Often, this can get some funny reactions, ranging from developers and testers thinking each side has it easy. Coders think that testers have an easy job, and if they just learned it, they'd be able to do it. Too. Testers often think the same about programmers. Here's the thing. It took each of us close to 15 years to get good at our respective efforts, so a little respect is in order :).
---
The last session for the day was Janet Lunde's "Balancing Between Dev & QA on Automated Tests" (gee, can you tell where both my personal energy and my anxieties are today ;)?). Who owns the test that are made for automation? Does the programming team? Does the testing team? Should there be a balance between them? Dale Emory said that he sees a lot of institutionalized "laissez faire" with the way that tests are implemented within many organizations. Often, tests are seen as just those things that testers run and then there's some numbers and a "red light/ green light" and we go on our way.
What if we could re frame these arguments? Who is the audience for the automated tests? What's the ultimate value of running them? Often, having the testers writing the tests prove to be a bottleneck due to experience level with coding. Ben Simo shared an example like this, and ho the development team was asked to run a number of the tests on the testers framework and critique/offer suggestions. With that, the development team started using the framework, and started writing tests on the framework. This allowed for a lot of bottlenecks to be cleared. More to the point, it allowed the developers to better understand the testers point of reference. This helped flesh out their tools and their knowledge, so that the testers, later, would be able to take over the automation. At that point, they were in a much more stable place to write more tests more quickly.
Talking about the balance of automation, it has been amusing to see how much automation and how much manual testing is done. A lot of manual testing isn't just desirable, it's necessary. From my own paper: automation is great at parsing data, piping inputs to outputs, looking for regular expressions and creating logs. What it can't do is create curiosity, make a real sapient decision, and "look" at the software. If an automated test is looking for elements to appear on the screen, it will tell you the elements are there. It won't tell you whether the CSS rules loaded unless it's programmed to. That file could be missing, the raw HTML displayed without styling, and the tests would still pass. The machine would miss the error; human eyes would not. Another great comment: "automation doesn't get frustrated". What that means is automation doesn't get irritated at response time or how much data needs to be entered. A real human will voice displeasure or frustration, and that's a valid piece of test data.
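As a concrete illustration of that gap, here's a hedged Selenium sketch of my own: the first assertion is the kind of check most scripts stop at (the element is present), and it would happily pass on an unstyled page, while the extra checks try to confirm the stylesheet actually took effect. The URL, element IDs, and expected style value are all made up for the example.

```python
# Sketch: an element-presence check passes even when the stylesheet fails to load,
# so add explicit checks for the CSS itself. All selectors/values are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # hypothetical page

# Typical automated check: the login button is present. This passes
# even if the page renders as raw, unstyled HTML.
assert driver.find_element(By.ID, "login-button").is_displayed()

# Extra check 1: the page actually declares at least one stylesheet.
stylesheets = driver.find_elements(By.CSS_SELECTOR, "link[rel='stylesheet']")
assert stylesheets, "No stylesheet links found on the page"

# Extra check 2: a computed style matches what the stylesheet should set
# (the expected value here is a hypothetical expectation for this example).
button = driver.find_element(By.ID, "login-button")
assert button.value_of_css_property("background-color") != "rgba(0, 0, 0, 0)", \
    "Button appears unstyled; did the CSS load?"

driver.quit()
```

Even with checks like these, it's still the human eyes that notice "something looks off"; the point is only that we can teach the machine to catch a little more than "the element exists".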
The key takeaway from the group was that "automated testing is a necessary item, but it does not replace real world, active, dynamic testing". While we can take out a lot of the mind numbing, repetitive stuff with automated testing, we can also use it to bring us to interesting places we might never have considered (see my presentation and my comments about taxicab automation).
---
And with that, I think I'm done for the day. I have to get ready to close out this great conference day, take the Met to the Portland Airport, and impatiently wait to get home to see my family :). Thanks for spending the last several days with me. I've learned a lot. I hope you have learned something through me, too :).
Wednesday, October 10, 2012
Software Testing: Reloaded, Day 3 in Portland
This is going to be a little different compared to the last three days, in the sense that I'm moderating/facilitating a Workshop/Tutorial, and I have a long standing tradition of not broadcasting details of workshops because conference attendees pay to attend these. Therefore they deserve to have the opportunity to absorb and apply the materials first. Plus, it makes many of the exercises less useful if they are freely disseminated, so I'm not going to talk about any of the exercises directly... but I will talk about why I'm here and what I've witnessed while I've been here :).
So often, we go to conferences, we listen to people talk, we watch PowerPoint fly by... and in none of these sessions do any of us touch a keyboard, grab a device, and actually DO something. Well, today is all about doing something, and more to the point, my doing something is to work in support of Matt Heusser and Pete Walen's "Software Testing: Reloaded".
This is a day of testing exercises, games, questions, theories, conundrums, challenges, and things that should make you go "hmmm..." and so far, that last one has proven to be very successful. While I won't talk about the actual exercises and solutions, I will talk about one observation from this morning that I thought was somewhat fun. We (meaning Pete, Matt, Ben Simo and I) came into the room where we were holding the workshop, and proceeded to throw everything into chaos. The nice, neat, well ordered room was rearranged so that tables were spread all over, in random fashion, with too few chairs in some places and too many in others, some upended and placed on top of tables. We watched as people filtered into the room, looked a little perplexed and bewildered, and then sat down. Wherever they could find a spot. Few participants said anything. They just sat and waited for us to talk. There was definite muttering, and definite confusion, but only two people asked what was going on, and when they asked if they could move the tables and we said "just have a seat", there were no more questions and they sat down.
As we started in, we asked if everyone was comfortable. Some said yes, some fidgeted a bit, but no one came out and said "uh, what's with the ridiculous layout of the room?" We thought for sure a room full of testers would say something! After we introduced ourselves, we asked everyone if they thought the room layout was ridiculous. They responded that the room layout did seem weird, and that they would like to change it. From there, we told everyone that their first priority was to set up the room in a way that they wanted, in a format that would work for them. The participants grouped into a number of teams, put their tables together, and we went from there.
So why am I mentioning this one incident in particular? It struck me that as testers, we seem to intensely feel that we are not the ones in power, that we inherit the environment we work in, and that because of that, we often take what we are given without making much of a fuss. We may write down notes, we may observe what's going on, but do we often come out and say "whoa, what's going on here?!" While that may be a gross generalization, today's experiment seemed to show that to be the case. Even with a completely ridiculous room layout, few spoke up about it. If they did say anything, even vaguely, about it, our answer of "just sit down" stopped the conversation. None of the testers in the room further challenged it, none of the testers asked why we were doing this, or what our reasoning for this setup was.
I considered it a bit odd at first, but then, I knew the outcome. I knew what the goal was, so a part of me wanted to see if anyone would step up to the plate and really challenge what we were doing. I think too often, we just blend in, because we know we are already the bearer of bad news most of the time, so being loud and up front may be a bit uncomfortable. How often do we put up with things because we don't want to make others uncomfortable, even if our silence leaves us uncomfortable? I know for me, it happens fairly often. Perhaps one lesson from this is that we should be willing to challenge conventional wisdom just a little more often, and not accept the first answer we get. If we are willing to do that, perhaps we can have a little more influence than we might otherwise have (end gross generalization).
So, how has your Wednesday been thus far :)?
Tuesday, October 9, 2012
Day 2 of PNSQC, Live from Portland
In my ever running tradition of Live Blogging events, I'm going to do my part to try to live blog PNSQC and the observations, updates and other things I see while I am here. Rather than do a tweet-stream that will just run endlessly, I decided a while ago that an imprecise, scattered, lousy-first-draft stream of consciousness approach, followed by cleaning things up into a more coherent presentation later, is a fairly good compromise. So, if you would like to follow along with me, please feel free to just load this page and hit refresh every hour or so :).
---
Here we go, day 2 of PNSQC, and we hit the ground running with Dale Emory's keynote talk "Testing Quality In". Dale starts out the conversation with the premise that testers are often under appreciated, and with that, their skills are often squandered. This is a tragedy, but one that we can do something about. What if we could identify the variables that affect the value of our testing?
One of the most common statements testers make is that "Testing Merely Informs". "Merely" in this case doesn't mean that the testing isn't important, it's that there's a limit to what we can inform and how far that informing can take us.
Dale walks us through an example of a discount value for a class of customers. Determining where we can add value by informing on the requirements CAN be an example of testing quality in, in the sense that it prevents problems from getting made into the product. On the other hand, when we say we CAN'T test quality in, we're looking at the end product, after it is made, and in that case, that's also true.
Can you test quality into products? How about testing quality into decisions? Does the information we provide allow our team(s) to make appropriate decisions? If so, do those decisions help to "test quality in"? Dale gave an example of one of the Mars Landers, where the flash file system filled up and the lander couldn't reboot. Was it a bug to get to that state? Yes. Was it known? Again, yes. Why wasn't it fixed? Because when the bug was found, it was determined that fixing it would push the launch date back by six weeks, which would mean Mars would be 28 million miles further away, and thus the launch team would be faced with a situation of "can't get there from here". With this knowledge, they went with it, knowing that they had a potential workaround if it were ever needed (which, it turns out, was the case, and they knew what to do in that event). Could that be seen as testing quality in? I'd dare say YES. Apparently, Dale would, too :).
So why do we say "You Can't Test Quality In"? The reason is that, if we start from a premise where we are running a lot of tests on a product that's already built, and the product team does nothing with the information we provide, then yes, the statement stands. It's also a gross caricature, but the caricature is based on some reality. Many of us can speak plainly and clearly to something very much like this. The problem is that the caricature has gotten more press than the reality. You absolutely can't test quality in at the end of a project when you have not considered quality at all during the rest of the process. Again, that's likely also a gross caricature, or at least I have not seen that kind of disregard on the teams I have been part of. Dale says that what we are actually reacting to is Ineffective Use of Testing (and often too late in the process). We also react to Blame ("Why didn't you find that bug?", or even "Why didn't you convince us to fix that?!"). Instead of fighting, we should seek to help (hilarious reference to Microsoft Office's Clippy... "It looks like you are trying to test a lousy product... How can I help?").
Another question is "why don't people act on our information?" It's possible they don't see the significance (or we are having trouble communicating it). Often there may be competing concerns. It's also possible that they just don't like you! So instead of just providing information, what if we were to give feedback? If feedback is accurate, relevant and timely, that also helps (scatological joke included ;) ). A way to help is for us to notice what we are really good at, and try to see how we can use those skills to give value in new ways. We are good at detecting ambiguity, we can detect ignorance, we can detect inconsistencies, we can imagine scenarios, consequences, etc.
What if we could test at the point where we can find errors, before we commit to them? This slides right into Elisabeth Hendrickson's advice in Explore It to explore requirements. Dale puts it as: test "Anything about which your customers have intentions that have not been validated". We can test requirements, we can test features, we can test stories, and ultimately, we can test the team's shared understanding. We can test our understanding of our customer's concerns. If they are not reacting to our testing, it's possible we are not connecting with what is important to them. Dale makes reference to Elisabeth's "Horror Story Headlines"... what would be the most horrifying thing you could see about your company on the front page of the newspaper? Think of those concerns, and you can now get to the heart of what testing is really important, and what really matters to the team. Consider testing your testing. How well are your customers benefiting from the information you provide?
Dale then gave us some homework... What is one small thing that you can do to increase the value or appreciation of testing in your organization? For me, I think it's finding a way to add value and help identify issues early in the process. Instead of doing the bulk of my testing after a story is delivered, try to find ways to test the stories before they become problems. In short, BE EARLY!
---
I had the chance to moderate the Test track talks. When we offer to moderate, sometimes we get talks that are assigned to us, sometimes we get to moderate the talks we really want to hear. This morning's talk is one of the latter, called "The Black Swan of Software Testing - An Experience Report on Exploratory Testing" delivered by Shaham Yussef.
The idea of a Black Swan is that it is an outlier from expectations; it doesn't happen often, but it can and does happen. It is likely to have a high impact, and we cannot predict when it will happen. Shaham considers Exploratory Testing as the Black Swan of software testing.
For many, Exploratory Testing is seen as a "good idea" and an "interesting approach", but we rarely consider it as a formal part of our test planning. Exploratory Testing, when performed well, can have a huge impact, because it can greatly increase the defect detection rate (perhaps even exponentially so). Exploratory Testing lets us look at issues early on, perhaps even in tandem with development, rather than waiting until the software is "officially delivered". Scripted tests, as we typically define them, are often limited to pre-defined test cases, while Exploratory Testing is geared towards focusing on an area and letting the product inform us as to where we might want to go next. It's test execution and test design performed simultaneously.
Are there places where Exploratory Testing may not apply? Of course. In highly regulated environments where steps must be followed precisely, this is a less effective argument, but even in these environments, we can learn a fair amount by exploration.
---
Lisa Shepard from WebMD is the next speaker and her topic is "Avoiding Overkill in Manual Regression Tests"
By a show of hands, a large number of people do manual testing, do some kind of manual regression testing, and can honestly say that they don't really know which tests they really need to run. Lisa shared her story of how she wrote hundreds of manual test cases, thinking that creating all of those cases was a good thing. Fast forward two years, and many of those test cases were no longer relevant, and in fact all of them ended up in a Deprecated folder.
This is an example of how much time can be wasted on over-documentation of tests that have already been run. Now a fairer question to ask is: what manual regression tests do we NOT want or need to document?
At some point, we reach diminishing returns, where the documentation becomes overwhelming and is no longer relevant to the organization. WebMD faced this, and decided to look at why they felt so compelled to document so many cases. Sometimes the quantity of tests is seen as synonymous with quality of testing. Sometimes the culture rewards a lot of test cases, just the way some organizations reward a large quantity of bugs (regardless of the value those bugs represent).
So what is the balance point? How do we write regression tests that will be valuable, and balance them so that they are written to be effective? There are lots of reasons why we feel the need to write so many of these tests.
Lisa thinks this is the one rule to remember:
All manual regression tests must be tested during regression testing
Huh? What is the logic to that? The idea is that, if every single test is run, certain things happen. A critical eye is given to determine which tests need to actually be run. If there's no way to run all of them during a test cycle, that should tell you something. It should tell you what tests you really need to have listed and that you need to physically run, and it will also help indicate which tests could be consolidated.
Now, I have to admit, part of me is looking at these examples and asking "Is there a way to automate some of this?" I know, don't throw things at me, but I'm legitimately asking this question. I realize that there are a number of tests that need to be and deserve to be run manually (exploration especially) but with a lot of the regression details, it seems to me that automating the steps would be helpful.
But back to the point of the talk, this is about MANUAL regression tests. What makes a test WORTHY of being a manual regression test? That's important in this approach. It also helps to make sure that the testers in question are familiar with the test domain. Another idea to take into consideration is the development maxim Don't Repeat Yourself. In the manual regression space, consider "Thou Shalt Not Copy/Paste". Take some time to go through and clean up existing test plans. Cut out stuff that is needlessly duplicated. Keep the intent, clear out the minuscule details. If you must, make a "To Be Deleted" folder if you don't trust yourself, though you might find you don't really need it.
Lisa made a number of metaphorical "bumper stickers" to help in this process:
'tis better to have tested and moved on than to have written an obsolete test'
'Beware of Copy and Paste'
'If you are worried about migrating your tests... YOU HAVE TOO MANY TESTS!!!'
'Less Tests = More Quality' (not less testING, just less test writing for the sake of test writing)
'Friends don't let friends fall off cliffs' (pair with your testers and help make sure each other knows what's going on).
---
The lunch session appealed to me, since it was on "Craftsmanship" and I had heard huge praise for Michael "Doc" Norton, so I was excited to hear what this was all about: how the cross sections of function, craft and art come together, and how we often lose some of the nuance as we move between them. Many of us decided that we didn't want to be seen as replaceable cogs. We want to be engaged, involved; we want to be seen as active and solid practitioners of our craft (software development and testing, respectively). In the Agile world, there was a de-emphasis on engineering practices, to the point where the North American Software Craftsmanship Conference formed in 2009 in protest of the Agile conference's de-emphasis of engineering principles and skills, and devised a new Manifesto for Software Craftsmanship.
As we are discussing this idea, it has struck me that, for someone like me, I have to seek out mentoring, or opportunities to mentor, outside of my company. In some ways I am the sole practitioner for my "community", yet sometimes I wonder if the skills that I offer are of real or actual value. In some ways, we reward individualistic, isolationist behaviors, and it's only when those behaviors start to hurt us that we wonder "how could we let this happen?" We live and work in a global economy, and because of that, there are a lot of people who can do competent, functional work, and because of their economic realities, they can do that work for a lot less than I can. What should I do? Should I hoard my skill and knowledge so that they can't get to it? For what? So that when I leave the game eventually, all of my skills are lost to the community? If I believe I have to compete on a functional level, then yes, that's a real concern. What to do? Stop competing on a functional level, and start growing on a craftsmanship and artistry level. If we excel there, it will be much more compelling for us to say that we have genuine talent that is worth paying for.
What if we had a chance to change jobs with people from different companies? What if you could share ideas and skills with your competitors? What if two startups could trade team members, so they go and work for the other team for a month or two, with the idea that they will come back and share their newfound knowledge with each other and their teams? It seems blasphemous, but in Chicago, two start-up companies are doing exactly that. What have they discovered? They've boosted each other's business! It sounds counter-intuitive, but by sharing knowledge with their competitors, they are increasing the size of the pie they both get access to.
We've had an interesting discussion following on about how Agile teams are, in reality, working and effectively dealing with issues of technical debt, and how they actually apply the processes that they use. Agile and Scrum mean one thing on paper, but in practice they vary wildly between organizations. Returning a focus to craftsmanship, allowing others to step up and get into the groove with others, and balancing a well functioning team with genuine growth in the discipline is imperative. The trick is that this needs to happen on a grass-roots level. It's not going to happen if it's mandated from management. Well, it may happen, but it won't be effective. People in the trenches need to decide this is important to them, and their culture will help to define how this works for them.
---
Jean Hartmann picks up after lunch with a talk about "30 Years of Regression Testing: Past, Present, & Future", and how the situation has changed dramatically, from how to determine the number of cases to run to whether or not those cases are really doing anything for us. Regression testing is one of software testing's least glamorous activities, but it is one of the truly vital functions needed to make sure that really bad things don't happen.
Kurt Fischer proposed a mathematical approach where the goal is: "if I do regression test selection, I want to pick the minimum number of test cases to maximize my code coverage". That was in the early 80's. At the time, hardware and CPU speed were limiting factors. Running a lot of regression tests was just not practical yet. Thus, in the 80's, large scale regression testing was still relatively theoretical. Analysis capabilities were still limited, and getting more challenging with the advent and development of stronger and more capable languages (yep, C, I'm looking at you :) ).
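For the curious, the selection problem Fischer was describing is essentially a set-cover problem. Here's a tiny greedy sketch of my own (an illustration, not Fischer's actual algorithm), where each test maps to the set of code blocks it covers:

```python
# Greedy sketch of regression test selection: repeatedly pick the test that
# covers the most not-yet-covered code blocks. Illustrative only; not the
# actual algorithm from the paper.
def select_tests(coverage):
    """coverage: dict of test name -> set of covered code blocks."""
    remaining = set().union(*coverage.values())
    selected = []
    while remaining:
        # Pick the test that adds the most new coverage.
        best = max(coverage, key=lambda t: len(coverage[t] & remaining))
        gained = coverage[best] & remaining
        if not gained:
            break  # nothing left that any test can add
        selected.append(best)
        remaining -= gained
    return selected

coverage = {
    "test_login":    {"auth", "session", "ui_header"},
    "test_checkout": {"cart", "payment", "session"},
    "test_profile":  {"auth", "ui_header"},
}
print(select_tests(coverage))  # e.g. ['test_login', 'test_checkout']
```

The hard part in the 80's wasn't the idea; it was getting the coverage data and the compute cycles to make a selection like this worthwhile.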
In the 90s, we saw hardware capacity and the increasingly ubiquitous nature of networks (I still remember the catch phrase "the Network is the Computer") open up the ability for much larger regression test cycles to be run. I actually remember watching Cisco Systems go through this process in the early 90's, and I set up several labs for these regression tests to actually be run, so yes, I remember well the horsepower necessary to run a "complete regression test suite" for Cisco Systems IOS, and that was "just" for routing protocols. I say just because it was, relatively speaking, a single aspect that needed to be tested: Layer 3 networking protocols and configuration. That so-called simple environment required us to set up twenty racks of near identical equipment so that we could parallelize what was effectively hundreds of thousands of tests. I can only imagine the challenge of testing a full operating system like UNIX, MacOS or Windows and all of the features and suites of productivity software! Still, while it was a lot of gear to support it, it was much less expensive than it would have been in the previous decade.
During the 2000s, setting up environments to do similar work became smaller, faster and much less expensive. Ironically, perhaps as a result of the dot-com bust, many organizations scaled back large scale regression testing. One organization that not only didn't back away, but doubled down, unsurprisingly, was Microsoft. Test suites for operating system releases frequently run into the hundreds of thousands of tests, both automated and manual. As you might imagine, test cycles are often measured in weeks.
Microsoft started using tools to help them prune tests and determine a more efficient allocation of test resources, and in the process developed Magellan Scout for this purpose.
Additionally, with the advent of less expensive hardware and the ability to virtualize and put machines up in the cloud, Microsoft can spin up as many parallel servers as it needs, and by doing so can bring down turnaround times for regression test suites. What does the future hold? It's quite possible the ability to spin up more environments for less money will continue to accelerate, and as long as that does, we can expect that the size of regression test cycles for places like Microsoft, Google and others will likely increase.
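As a back-of-the-envelope illustration of why those parallel machines matter, here's a tiny sketch with completely made-up numbers (not Microsoft's):

```python
# Rough turnaround arithmetic for a large regression suite (numbers invented).
total_tests = 300_000
avg_minutes_per_test = 0.5          # 30 seconds each, on average

serial_hours = total_tests * avg_minutes_per_test / 60
for workers in (1, 50, 500, 5000):
    hours = serial_hours / workers
    print(f"{workers:>5} parallel machines -> ~{hours:,.1f} hours")
# One machine works out to ~2,500 hours (over 100 days); 5,000 machines is about
# half an hour, ignoring setup, flaky reruns, and tests that can't run in parallel.
```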
---
For the rest of the day, I decided to focus on the UX track, and to start that off, I attended Venkat Moncompu's presentation on "Deriving Test Strategies from User Experience Design Practices". The initial focus of Agile methodologies points to an idea that scripted tests with known expectations and outcomes give us automated confirmatory tests as a focus. Human beings, though, are best suited for exploration, because we are curious by nature.
While Exploratory Testing offers us a number of interesting avenues (free-style, scenario-based, and feedback driven), there is also an aspect we can use to help drive our efforts: User Experience criteria. Think about the session based test management approach (charter - test - retrospective - adaptation). Now imagine adding User Experience criteria to the list (create personas, create profile data, and follow them as though they are real people), and we start to see the product as more than just the functions that need to be tested; we start to see it in the light of the people most likely to care about the success of the product.
So why does this matter?
Scenarios that can adapt to the behavior of the system offer more than scripted test plans that do not.
The personas and user scenarios provide us with a framework to run our tests. Exploration, in addition to developing test charters, also helps our testing adapt based on the feedback we receive.
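Here's a small sketch of what "personas plus profile data feeding session charters" might look like in practice. The personas and charter wording are entirely invented for illustration, not from Venkat's talk:

```python
# Sketch: invented personas with profile data, used to stamp out session charters.
from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    goal: str
    quirks: str          # profile details that should shape the session

personas = [
    Persona("Rushed nurse", "enter vitals between patients",
            "tabs through fields and never touches the mouse"),
    Persona("New billing clerk", "correct a rejected claim",
            "relies on the search box and on-screen error messages"),
]

def charter(p: Persona, area: str) -> str:
    return (f"Explore {area} as '{p.name}' trying to {p.goal}, "
            f"keeping in mind that this user {p.quirks}. "
            f"Note anything that slows them down or confuses them.")

for p in personas:
    print(charter(p, "the claims entry screen"))
```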
So why don't we do more of this? Part of it seems to be that UX is treated as a discrete component of the development experience. Many companies have UX directors, and that's their core competency. That's not to denigrate that practice. I think it's a good one, but we need to also get testers actively engaged as well. These are good methods to help us not just be more expressive with our tests, but ultimately, it helps us to care about our tests. That's a potentially big win.
---
The final session for today (well, the last track session) is a chance to hear my friend and conference room mate Pete Walen. Pete is also discussing User Experience Design, User Experience and Usability. UX comes down to Visual Design, Information Architecture, Interaction Design and Accessibility. Pete emphasizes that the key to all of this is "the anticipated use". We talk a lot about this stuff, and that's why Pete's not going to cover those aspects in depth. None of those ideas matter if the people using the software can't do what they need to in a reasonable fashion.
Pete has decided to talk about ships that sank... figuratively. Thus, Pete decided to share some of his own disasters, and what we could learn from them. I couldn't help but chuckle when he was talking about being made a test lead for 100 developers... and he would be the sole tester for that initiative. No, really, I can relate! The first project was to replace an old school mainframe system with a modern, feature friendly windowed system, but using all of their old forms. Everyone loved it... except the people who had to enter the data! All the old forms were the same... except that error messages were rampant. The new forms spread what had been one screen over five screens... and those five screens did not account for the fact that the form expected all the values to be filled in. The system was carefully tested; the odd behavior was referred back by the developers as "as designed". Did the users approve of the design? Yes... if you mean the people who parse the data from the screens. What about the people who actually entered the data? They were not asked! Whoops! In short, visual appeal trumped functionality, and lost big time.
This was a neat exercise to show that the "user" is an amorphous creature. They are not all the same. They are not necessarily doing the same job, nor do they have the same goals. Yet they are using the same application. In fact, the same person can do the same thing at different times of the day and have two totally different reactions. That happens more than we realize. We want our systems to support work, but often we make systems that impede it.
We are often guilty of systematic bias: we think that the outcome we desire is the one that should happen. This is also called "Works As Designed" Syndrome. In short, the people making the rules had better be involved in the actions that are being performed. If we want to make sure that the system works well for the users entering the data, we have to include them in the process. Pete used a military phrase as a colorful metaphor for this... "when the metal meets the meat!" That means YOU are part of the process: if the rules apply to you, then YOU need to be part of the PROCESS. In short, if the system is being designed for you, YOU should be the one involved in making sure the design works for YOU.
Additionally, when you are dealing with systems, know that testing one component exhaustively while doing no testing on the other components is not going to end well. People do not see "the system" the same way you do. There is no single user. This is why personas are so important. Their experiences are intertwined, but they are different. Even people doing similar processes and jobs are very often unique, and have their own needs based on location and focus.
The catch with UX work is that there is a fair amount of training required to do it well. How many testers have actually received any formal UX training? For that matter, how many testers have received any formal testing training from their company? [Hilarity ensues].
On a more serious note, someone recommended handing the official user's manual to their testers and letting them loose for a while... and that's actually not at all a bad idea for some introductory UX training. Why? Because a user's manual is (truthfully) the closest thing to a requirements doc many people will ever get their hands on! In my experience (and many other people's), a user's manual is both a good guide and usually wrong in places (or at least inconsistent). A lot of questions will develop from this experience, and the tester will learn a lot by doing this. Interestingly, many services are being made with no manual whatsoever, so we're even losing that possibility.
Ultimately, we need to do all we can to make sure that we do the best we can when it comes to representing our actual users, as many of them as humanly possible. There are many ways to do that, and it's important to cast the net as broadly as possible. By doing so, while we can't guarantee we will hit every possible issue, we can minimize a great number of them.
---
You all thought I must be done, but you'd be WRONG!!! One more session, and that is with Rose City SPIN. Michael "Doc" Norton of Lean Dog is presenting a talk called "Agile Velocity is NOT the Goal!" Agile velocity is a measure of a given number of stories and the amount of work time it takes to complete them. According to Doc, velocity is actually a trailing indicator. What this means is that we have to wait for something to happen before we know that something has happened. We use past data to help us predict future results.
Very often we end up "planning by velocity". Our prior performance helps us forecast what we might be able to do in our next iteration. In some ways, this is an arbitrary value, and is about as effective as trying to determine the weather tomorrow based on the weather today, or yesterday. Do you feel confident making your plans this way? What if you are struggling? What if you are on-boarding new team members? What if you are losing a team member to another company?
One possible approach to looking at all the information and forecasting is to instead use standard deviation (look it up, statistics geeks; I just saw the equation and vaguely remember it from college). Good news: the slides from this talk will be available so you can see the math, but in this case, what we are seeing is a representative example of what we know right now. Standard deviation puts us between 16 and 18 iterations. That may feel uncomfortable, but it's going to be much more realistic.
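For anyone who wants the math without waiting for the slides, here's a rough sketch of the calculation with velocity numbers I invented; the 16-to-18 figure in the talk came from Doc's own data, not from these:

```python
# Sketch of forecasting with the mean and standard deviation of past velocity.
# The velocities and backlog size are invented for illustration.
from statistics import mean, stdev
from math import ceil

velocities = [21, 18, 25, 17, 22, 19, 24, 20]   # points completed per iteration
remaining_points = 350

avg = mean(velocities)
sd = stdev(velocities)

optimistic = ceil(remaining_points / (avg + sd))   # if we run "hot"
pessimistic = ceil(remaining_points / (avg - sd))  # if we run "cold"
print(f"mean {avg:.1f}, std dev {sd:.1f}")
print(f"likely range: {optimistic} to {pessimistic} iterations")
```

A range feels less tidy than a single date, but it's a far more honest answer than dividing the backlog by last iteration's velocity.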
One of the dangers in this, and other statistical approaches, is that we run a dangerous risk when we try to manage an organization on numbers alone. On one side, we have the Hawthorne Effect: "that which is measured will improve". The danger is that something that is not being measured gets sacrificed. Very often, when we measure velocity and try to improve the velocity, quality gets sacrificed.
Another issue is that stories are often inconsistent. Even 1 point stories or small stories can vary wildly. Also, there are other factors we need to consider. Do we want velocity to increase? Is it the best choice to increase velocity? If we take a metric, which is a measure of health, and make it a target, we run the risk of doing more harm than good. This is Goodhart's Law; he said, effectively, "making a metric a target destroys the metric". There are a number of other measures that we could make, and one that Doc showed was "Cumulative Flow". This was a measure of Deployed, Ready for Approval, In Testing, In Progress, and Ready to Start. This was interesting, because when we graphed it out, we could much more clearly see where the bottleneck was over time. While, again, this is still a trailing indicator, it's a much more vivid one; it's a measure of multiple dimensions, and it shows over time what's really happening.
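A cumulative flow chart is really just the count of items in each workflow state, sampled over time and stacked. Here's a tiny sketch with invented data (in practice you'd pull these counts from your tracker each day):

```python
# Sketch: cumulative flow as a per-day count of items in each workflow state.
# All numbers are invented for illustration.
states = ["Deployed", "Ready for Approval", "In Testing", "In Progress", "Ready to Start"]
daily_counts = {
    "Mon": [2, 1, 3, 4, 10],
    "Tue": [3, 2, 5, 4,  8],
    "Wed": [4, 2, 8, 4,  6],   # "In Testing" keeps swelling: a likely bottleneck
    "Thu": [5, 3, 9, 4,  4],
}

print("day  " + "  ".join(f"{s[:10]:>10}" for s in states))
for day, counts in daily_counts.items():
    print(f"{day:<4} " + "  ".join(f"{c:>10}" for c in counts))
```

Even in a plain table like this, a band that keeps widening (here, "In Testing") points at the bottleneck in a way a single velocity number never will.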
Balanced metrics help out a lot here, too. In short, measure more than one value in tandem. Consider hours alongside velocity and quality. What is likely to happen is that we can keep hours steady for a bit, and increase velocity for a bit, but quality takes a hit. Add a value like "Team Joy" and you may well see trends that help tell the truth of what's happening with your team.
Monday, October 8, 2012
Live From Portland, It's PNSQC!!!
In my ever running tradition of Live Blogging events, I'm going to do my part to try to live blog PNSQC and the observations, updates and other things I see while I am here. Rather than do a tweet-stream that will just run endlessly, I decided a while ago that an imprecise, scattered, lousy-first-draft stream of consciousness approach, followed by cleaning things up into a more coherent presentation later, is a fairly good compromise. So, if you would like to follow along with me, please feel free to just load this page and hit refresh every hour or so :).
---
Let's see, first thing I should mention is the really great conversations I had yesterday with Pete Walen, Ben Simo and Gary Masnica, who joined me for various assorted meals as I came into Portland and got settled for this week. I want to also thank Portland in general for being a simply laid out city with lots of options for food and diversion in a very short distance. It reminds me of San Francisco without the triangular grids :).
We had a breakfast this morning for all of the people that helped review papers and get them whipped together for the proceedings. Seeing as the proceedings book is a 500-plus page collection of the talks, presentations and other details that make up the record of the conference, and for many an official way to get ideas published and disseminated, this was a great chance to meet the very people that helped make these papers be the quality that they are. We had some fun discussions, one of which was based around the "do not bend" instructions placed on my poster tube. Yes, we figured out ways that we could more effectively damage the tube beyond trying to bend it ;).
---
Right now, we are getting things started, and Matt Heusser will be giving the opening keynote talk. Matt has stated on Twitter that this is the talk of his life. I'm excited to hear it. The goal of Matt's talk is to discuss "The History of the Quality Movement and What Software Should Do About It". Much of software's issues stem from its analogs to physical manufacturing, as Matt brought us all the way back to Taylor and Bethlehem Steel and the way that the workers carried "pig iron" across the yard, so that they could be more efficient (and save the men's backs). By measuring the way that the men were carrying the iron, they worked through the methods necessary to triple productivity (and make a lot more money). Matt has taken this legendary myth and broken it down to show the inconsistencies of the story behind this "one right way" that has been championed ever since. The idea of separating the worker from the work persists, though. Henry Ford and the Hawthorne experiments followed later. With the people assembling telecommunications switches in the 20's, the researchers examined how changing the environment would improve productivity. In fact, just about any change improved productivity. The cynical response is "you will improve any aspect that you directly measure". The more human reply is "if you let people know that you care about their welfare, they're likely to improve just because they know you care!"
World War II set the stage for America's ascendancy, not because America was so fantastic in how it did stuff, but because we were really the only game in town still standing. Europe was bombed, Japan was destroyed, and Russia was gobbling up its hemisphere and locking out everyone else. This made a unique environment where many parts of the world were looking to rebuild and the U.S. was exporting talent and know-how all over the world. One of those people, as testers know, was W. Edwards Deming. Deming didn't find many people interested in "Quality" in the U.S., as companies had customers coming out of their ears. Japan, however, welcomed Deming with open arms, and Deming took a walk through their environments to see what was really happening. By discovering the challenges early on, where the work is actually done, the quality of the product will improve. To get there, though, you have to go where the real work happens.
Cute question... if you have a dress that's supposed to be a 50% cotton and 50% wool blend, and you take a top, which is 100% cotton, and a skirt that is 100% wool, and you sew the two together... does that meet the requirement of a 50/50 blend ;)? That may sound silly on the surface, but truth be told, it could be seen as meeting the requirements. It shows that language is vague, and we have to go beyond the idea of "standards" when we make our definitions, so that common sense has a chance to be applied. The most famous of the examples from Deming of course comes from Toyota. Toyota took Deming's ideas and applied the Japanese concept of "kaizen", or "continuous improvement". They took the approach that they wanted to do all they could to improve the just-in-time delivery of building automobiles.
Womack and Jones come to Japan to see why they (the Japanese Manufacturing movement) are suddenly clobbering American Manufacturing, with "Lean manufacturing" growing out of Toyota's process improvements. From here, ideas like one piece flow, reduced cycle time, and the "Kanban" process (the manufacturing approach, not the current agile methodology) grew out of these experiments and observations. The point is, we looked at the systems that we could standardize, but we took out the human element that made it actually work. Meanwhile, back in the states, we decide to look at International Standards and Six-Sigma. Standards are not bad. They're kind of essential if you want to buy a AA battery and make sure it will work. That's a good thing! Making sure a Duracell or Energizer delivers 1.5 volts per unit, consistently, that matters a lot. Standards help make that happen. Process standards are a bit more fuzzy. Does it make sense, really? Sometimes it does, but often it doesn't.
The idea of Six-Sigma works for manufacturing when it comes to very specific processes, but when we apply it to areas like software defects, and try to predict this amorphous, moving product, Six-Sigma doesn't really apply. Very often, those proposing Six-Sigma for software don't have the slightest idea as to why the analogues for manufacturing physical hardware products don't match up to actual software applications. The ideas of functions, processes and methods seem to make sense, until we actually try to apply them to really creating code and how code actually works (and no, I will not pretend that I understand how code actually works. Many minds far better than mine have admitted that even they don't know how it *really* works!). The biggest factor that we need to reconsider (and that seemed to be lost from the Deming discussions with Toyota and others) was the fact that the Japanese companies did not remove the human element from the work. Here in the U.S., we did (or certainly did everything we could to try). To an extent, some stuff can be measured and controlled, but many things cannot be so easily tweaked.
This is all great, but what can we do with all of this? Matt wanted to make sure we went home with some stuff we could use immediately, and one of the ideas he said we could all jump on immediately is the idea of operational definitions. I had to jump in here and say that this reminded me of Elisabeth Hendrickson's recent writing in her Exploratory Testing book on exploring requirements (yes, I'm giving Elisabeth a plug here, it's my blog, deal with it ;) ). If you don't think this is important, consider the recent Presidential debate we had. Two sides saying the other is lying, while each side believed wholeheartedly that they were telling the truth. How can both be right? When they are using different definitions, it's very possible. Agreeing on the definitions helps a lot. The cool thing is, it's a lot easier to agree on definitions in software than it is in politics. Thus, it's often important to realize that definitions of Quality suffer from the same problems.
Some other things we can do? Start a book club and discuss these ideas. By getting the principal players together to discuss a book, you can take out politics and divisions, and just focus on the ideas. Even if you can't completely remove the politics, you can minimize it, focus on the issue, and give everyone a chance to contribute to addressing it. Another thing we can do is start taking our own "Gemba walks": take the time to look at the real issues facing your team or organization. People doing the work need to feel like they can improve it themselves. When they just "follow the process", they have effectively given up. Do what you can to defeat that defeatism. Nice! Good takeaway and great talk. Last minute details: if you have the chance to send a team member to a conference, do what you can to help them have that experience. If they are deeply engaged in the work, they are likely to come home energized and full of ideas. Take the chance to mess with a few of those ideas. Worst case, you may do something that doesn't work. Best case, it just might work :). What we as attendees need to STOP doing is saying "I want permission to do this". Don't ask permission, just do it. If you yield good results, show them. If you don't yield good results, hey, you tried something and learned a little more. Better luck next time. Even that is a better outcome than doing nothing.
---
The program for the conference is all over the map (as well as spread over three floors), so if I didn't get to the talk you were most interested in, my apologies. For the 10:15 AM talk, I thought it would be interesting to check out "An Action Model for Risk" with Jeffrey A. Robinson. This is an excerpt of an 8 hour tutorial, but the question can be distilled to the following... after we train people on all of these quantitative techniques for a year, we then require two years to un-train them from what we've taught them! They follow the procedures to the letter, they can analyze problems, they can analyze risk, but they cannot actually make a decision. The reason? They analyze the risks and then they freak out! Sometimes, too much analysis causes paralysis! At times, we need to stop analyzing and just do. We very often come to a conclusion based on a little poking and prodding, not a deep and comprehensive analysis. Root cause analysis is helpful, but very often, it's the very wrong thing to do at the time the problem occurs. Analysis makes sense later, but it often doesn't make any sense to do it in the heat of an issue. Assigning blame can come later; right now, let's get the problem fixed!
Risk is not the same thing as uncertainty. Sometimes we confuse the two. What does risk really mean? Is it the cost of failure? Is it the loss of time? Is it the loss of money? Is it the loss of prestige or customer "good feeling"? Risk is the cost of acting (or not acting). Uncertainty is a very different animal. It's non-linear, and it's related to probability. If you have a good feeling of probable good or probable bad, uncertainty is low. If you don't know the probability, uncertainty is at its maximum, and that's when analysis is helpful (even necessary!).
Great example: a washer failed at a company, so badly that they couldn't tell what kind of washer it was. The company's mandate was to do a reverse engineering analysis to determine what the part was. Did that make sense? Ridiculous! Why? Because with a credit card and a trip to ACE Hardware, they bought a range of washer sizes for $20, tried each of them until they found the one that fit, and returned the rest. An hour of time, a $0.25 part, and they are up and running. Pretty cool example of saying "chuck the risk and try a solution NOW!"
There are several other examples that are a lot less trivial and a lot more potentially expensive. When we have a genuine risk of expense either way, and making a choice is going to be one kind of expensive, while the other option is unknown but likely expensive as well, then it is time to do an analysis, with the goal of either reducing the risk or reducing the uncertainty. Guess what? A Six-Sigma checklist is not likely to really help you reach a good conclusion. If the uncertainty is low and the cost is low, just freakin' DO IT.
As the risk or uncertainty rises, the need for analysis goes up. When your product has the potential of "DEATH", it's OK to analyze. PLEASE, analyze! Even in these environments, there is the possibility of overdoing it; teams can be so conservative as to prevent any action or innovation whatsoever. Let's face it, many of us are not working on nuclear reactor software or the microcode to drive a pacemaker, so let's stop pretending that we are. That doesn't mean we abandon all thoughts of risk or uncertainty, but it does mean we need to address the risks realistically, and our analysis should be appropriate, and sometimes that means it's not warranted at all! If your boat is sinking, root cause analysis is not required at this time. Swimming for your life, or finding something that floats, is!
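To capture the rough decision rule from the talk in one place, here's a toy sketch; the thresholds and wording are mine, not Jeffrey's:

```python
# Toy decision rule for "analyze vs. just act", with invented thresholds.
# risk = rough cost of acting (or not acting); uncertainty = how little we know
# about the probabilities. Both scored 0.0 (low) to 1.0 (high).
def next_step(risk: float, uncertainty: float) -> str:
    if risk < 0.3 and uncertainty < 0.3:
        return "Just do it: try the cheap fix now, analyze later if at all."
    if risk >= 0.8:
        return "Lives or the business at stake: analyze carefully before acting."
    if uncertainty >= 0.7:
        return "Do just enough analysis to reduce the uncertainty, then act."
    return "Timebox a quick analysis, then pick the least costly option."

print(next_step(0.1, 0.2))   # the $0.25 washer
print(next_step(0.9, 0.4))   # pacemaker firmware
```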
---
The next talk I decided to check out was from "Professional Passport", or at least that was what the lead-off slide portrayed. Initially, I was skeptical; I thought this was going to be a product pitch or product demo, but that wasn't the case. "Cultural Intelligence: From Buzz Word to Biz Mark", presented by Valerie Berset-Price, spoke to the idea that the world's inter-connectivity has changed the way that we communicate and interact with others from different cultures. I know that this is true, and I have experience with this not so much through work, but through BBST. With work increasingly done by interconnected teams, what will work look like in the future? Will work be regional, or will it be global? Can we develop a global mindset that is not a "my way or the highway" approach? Is it necessary to share norms to be effective?
The Lewis model is a breakdown between three continuums to help determine how to interact, generally, with a business contact. The Linear-Active (blue), the Multi-Active (brown) and the Reactive (yellow) corners of the triangle roughly place the cultures of a variety of countries. The ways that these countries and their business people interact with each other and with people around the world vary, and they sit in different places on the continuum. The point? It helps to know where people generally lie on that continuum. Are they transactional or relationship oriented? Is their culture egalitarian, or do they have a stratified hierarchy? Do they celebrate the individual or the group? Are they Driven or are they Live For Today?
These differences are real, they are cultural and they matter. if you work with people who are from a different part of the continuum, it's important to realize that people react differently based on these cultural factors. The U.S.A., political rhetoric aside, is actually extremely egalitarian and individualistic. Many people come from countries where groupist approaches are more normal, where the hierarchy is well established, where the idea of a highly educated individual reporting to a person with lesser scholastic achievement is absurd. How do you deal with those dynamics? What does it mean to actually live in these environments? How do we handle things like individual praise vs group involvement?
What happens when you go to a country and you have to drive on the opposite side of the road (and the opposite side of the car)? It messes with your sense of reality, and when you do so, you turn off the radio, you turn off the phone, you don't talk, you just drive! It shakes up your system, and you must learn how top deal with it, or risk crashing. Other cultures are the same way. The problem is not as dire as when we drive in another country, but our reactions should be the same. We need to turn off our auto pilot, we need to stop expecting people to behave the way we do. We need to learn a little "When in Rome".
This doesn't mean shut off our morals, our compass, or our common sense. It does mean understanding what is important to another person from another culture. Don't think that just because someone speaks fluent English, that they get everything you say. I know full well just how little my Japanese gets me beyond "where's the train station, where's food, where can I buy Manga?" (I'm kidding a little; I studied Spanish and German when i was a kid, and I still get some of the gist of what people are saying in those languages, and that alone has made it very real to me how challenging it can be to communicate with other cultures).
---
During the lunch hour, Matt Heusser and I gave a talk about Testing Challenges, specifically how to use testing challenges to mentor prospective testers and help see areas in which they could potentially learn and develop. For those who have read my blog in the past, you are somewhat familiar with Miagi-do, which is the grassroots school of software testing that we are both affiliated with, and which Matt founded a few years ago.
This is the first time that we have done this kind of a talk, and peeled back some of the veil of secrecy that surrounds the school. Well, OK, it's not really secret, we just don't advertise it. We walked the participants through a few testing challenges and discussed how those challenges could be used with helping mentor other testers.
Matt and I took turns discussing different challenges, ranging from a salt shaker to getting my cell phone to play music. For those who want the answers to these challenges, well, you'll have to ask someone who was there ;). Seriously, though, it was neat to have the opportunity to share some of these challenges in a public setting and encourage others to use our model and approach with themselves or their teams.
---
For the afternoon, I am shifting my focus from speaker to moderator. Currently, we are listening to the Automation track, and I'm hearing a talk from Shilpa Ranganathan and Julio Lins (of Microsoft) about some of the challenges they face with regards to Testing on Multiple Platforms. Both Shilpa and Julio are from Microsoft's Lync team. Leaving the server aspect of the product out of the equation, there are many possible platforms that the software is meant to run on: Windows client (.net, Silverlight), web clients, and mobile clients for iPhone and Windows 7 phone (hmmm, no Android?). It would seem like a no brainer that making tests that could be reused would be an obvious win.
That makes sense, of course, but what we would like to do vs. what actually happens are often very different. It as interesting to see the ways in which they structured their testing and design so that they could reuse as much as possible. As their talk makes clear, it's not possible to completely reuse their code on all platforms, but for as many cases as possible, it makes sense. This gels well with my personal efforts to "write once, test anywhere" approach, and while it's not entirely possible to do this in all circumstances, it certainly makes it easier to do so where you can. When in doubt, smaller test cases are usually more portable than all encompassing cases. Also, making efforts to stabilize platforms made more sense than to try to just copy test code between environments.
---
For my second moderated talk, I am listening to Vijay Upadya, also from Microsoft, discuss ideas behind Dynamic Instrumentation, and how it can be used with State Based Testing. The idea is that there is a specific state that needs to be met and established for a test to be valid. Often, we cannot fully control those state transitions, or catch them at the critical moment that they happen, because ultimately they happen in the background, and rarely on the timeline that we have set. Dynamic instrumentation uses the idea that, through code and binary manipulation, we can set up these states as we need them to occur. Sounds tricky? Aparently, yes, it is.
Vijay talked about this approach with the project he was working with while Microsoft was developing SkyDrive. By using a library called Detours, they were able to rewrite memory to create specific state conditions as needed, and most importantly, in a repeatable manner. I'll have to admit, part of me is intrigued by this, part of me is disappointed that this is a Microsoft specific technology (nothing against Microsoft, but now I'm wondering is there's any way to do this in the Ruby space (will have to investigate later this evening when I get some free time :) ).
---
With more of my involvement with mobile devices, I found the idea of a more mobile discussion interesting, so Billy Landowsky's "Shake 'n' Send" seems interesting. Think about this... how do we get reasonable user feedback from our mobile users? How can we get real and effective feedback about how these apps are developed? The screen size limitations, single app focus and the very "walled garden" realities of many of these applications makes for a challenging environment to develop for, much less test (this was brought home to me this weekend specifically as we did a Weekend Testing event specific to mobile app testing and the challenges we face.
In desktop applications, there are "Collect Logs" options, or a customer feedback program that occasionally polls the server and sends updates. On a mobile carrier, that kind of communication model is not as desirable (not to mention, for some, impossible, as network connectivity is not always guaranteed).
Sure, there's a lot of disadvantages... but what about the pluses that could be leveraged? The solution developed for the Windows 7 Mobile devices is called Shake 'n' Send. The accelerometer in the device can detect that activity to send feedback to the MS servers. It can also queue up email so that, when a connection is available, the OS takes care of it so that, when there is a connection, the messages can be sent.
By shaking the phone, the feedback can be sent that ranges from "Happy" to "Bug" with trace info and diagnostic data included. The bigest benefit to the user; they never have to leave the application. Gotta' admit, this sounds like an intriguing idea. I'd be curious to see how this develops over time.
---
The last official track talk of the day is one I have wanted to see for a long time. When I first came to PNSQC in 2012, I was moderating another track, so couldn't see it. Last year, I was laid up with a broken leg and rehabilitating, and couldn't attend PNSQC at all. Thus, I made sure to keep my schedule clear so that I could actually watch these high school students from Lincoln High School in Portland discuss "Engineering Quality in the Robotic World", and chart their discoveries and growth over the past several years (their club has been active since 2005). Coming from the First Lego League and their basic level of software development to a more advanced robotic development and programming has been a fascinating journey.
Each of the teenagers (Ida Chow, MacLean Freed, Vicki Niu, Ethan Takla and Jeffry Wang, with their sponsor Kingsum Chow) showed how they were able to develop their robots, attempt and overcome a variety of challenges, and ultimately share the lessons learned from their discoveries and competitions. I had a chance to interview several of the members of this team during the Poster Paper presentation in 2010, and it's cool to see how they have developed their skills since that interview two years ago. Seriously inspiring, and a bit humbling, too. Props to them for their presentation skills as well, as they have become quite good. Now, to their presentation... One of the cool things to see that they overcame were limitations in design, as well as how they were able to make changes from plastic lego blocks to machined metal, finer motor controls and a more powerful programming platform (using Robotic C).
Hearing this topic is neat in and of itself. Hearing five high school kids talk to this level and with this level of confidence kicks it up to just short of amazing. Seriously, these kids are impressive, and barring some catastrophic derailing of focus or attention, I think they will have a magnificent future in this industry :).
---
We had a great evening reception and I had many opportunities to discuss my paper's points and my own topic of "Get the Balance Right: Acceptance Test Driven Development, GUI Automation and Exploratory Testing". Each group I discussed it with helped me to se an additional facet I had not directly considered, so much so that I think there's a lot more potential life in these ideas beyond just a paper and a presentation, so stay tuned for more on that front. With that, it's time for dinner, so I'm going to see out one of many of Portland's famous food trucks and call it a night. See you all tomorrow!
---
Let's see, the first thing I should mention is the really great conversations I had yesterday with Pete Walen, Ben Simo and Gary Masnica, who joined me for various meals as I came into Portland and got settled for the week. I also want to thank Portland in general for being a simply laid-out city with lots of options for food and diversion within a very short distance. It reminds me of San Francisco without the triangular grids :).
We had a breakfast this morning for all of the people who helped review papers and whip them into shape for the proceedings. Seeing as the proceedings book is a 500-plus-page collection of the talks, presentations and other details that document the conference, and for many an official way to get ideas published and disseminated, this was a great chance to meet the very people who helped make these papers the quality that they are. We had some fun discussions, one of which was based around the "do not bend" instructions placed on my poster tube. Yes, we figured out ways that we could more effectively damage the tube beyond trying to bend it ;).
---
Right now, we are getting things started, and Matt Heusser will be giving the opening keynote talk. Matt has stated on Twitter that this is the talk of his life. I'm excited to hear it. The goal of Matt's talk is to discuss "The History of the Quality Movement and What Software Should Do About It". Much of software's trouble stems from its analogs to physical manufacturing, and Matt brought us all the way back to Taylor and Bethlehem Steel and the way the men there carried "pig iron" across the yard, so that they could be more efficient (and save the men's backs). By measuring the way the men carried the iron, Taylor worked out the methods necessary to triple productivity (and make a lot more money). Matt took this legendary story and broke it down to show the inconsistencies behind the "one right way" that has been championed ever since. The idea of separating the worker from the work persists, though.

Henry Ford and the Hawthorne experiments followed later. In the Hawthorne studies, researchers examined people assembling telecommunications switches in the 1920s to see how changing the environment would improve productivity. In fact, just about any change improved productivity. The cynical response is "you will improve any aspect that you directly measure". The more human reply is "if you let people know that you care about their welfare, they're likely to improve just because they know you care!"
World War II set the stage for America's ascendancy, not because America was so fantastic at how it did things, but because we were really the only game in town still standing. Europe was bombed out, Japan was destroyed, and Russia was gobbling up its hemisphere and locking out everyone else. This made for a unique environment in which much of the world was looking to rebuild and the U.S. was exporting talent and know-how everywhere. One of those exports, as testers know, was W. Edwards Deming. Deming didn't find many people interested in "Quality" in the U.S., as companies had customers coming out of their ears. Japan, however, welcomed Deming with open arms, and Deming took a walk through their environments to see what was really happening. By discovering challenges earlier on, the quality of the product improves. To get there, though, you have to go where the real work happens.
Cute question... if you have a dress that's supposed to be a 50% cotton and 50% wool blend, and you take a top, which is 100% cotton, and a skirt that is 100% wool, and you sew the two together... does that meet the requirement of a 50/50 blend ;)? That may sound silly on the surface, but truth be told, it could be seen as meeting the requirement. It shows that language is vague, and we have to go beyond the idea of "standards" when we make our definitions, so that common sense has a chance to be applied. The most famous of the examples from Deming, of course, comes from Toyota. Toyota took Deming's ideas and applied the Japanese concept of "kaizen", or "continuous improvement". They took the approach that they wanted to do all they could to improve the just-in-time flow of building automobiles.
Womack and Jones came to Japan to see why Japanese manufacturing was suddenly clobbering American manufacturing, with "Lean manufacturing" growing out of Toyota's process improvements. From there, ideas like one-piece flow, reduced cycle time, and the "kanban" process (the manufacturing approach, not the current agile methodology) grew out of these experiments and observations. The point is, we looked at the systems that we could standardize, but we took out the human element that made them actually work. Meanwhile, back in the States, we decided to look at international standards and Six-Sigma. Standards are not bad. They're kind of essential if you want to buy an AA battery and make sure it will work. That's a good thing! Making sure a Duracell or Energizer delivers 1.5 volts per unit, consistently, matters a lot. Standards help make that happen. Process standards are a bit more fuzzy. Do they make sense, really? Sometimes they do, but often they don't.
The idea of Six-Sigma works for manufacturing when it comes to very specific processes, but when we apply it to areas like software defects, and to predicting this amorphous, moving product, Six-Sigma doesn't really apply. Very often, those proposing Six-Sigma for software don't have the slightest idea why the analogues from manufacturing physical hardware don't match up to actual software applications. The ideas of functions, processes and methods seem to make sense, until we actually try to apply them to really creating code and how code actually works (and no, I will not pretend that I understand how code actually works. Many minds far better than mine have admitted that even they don't know how it *really* works!). The biggest factor that we need to reconsider (and that seems to have been lost from the Deming discussions with Toyota and others) is that the Japanese companies did not remove the human element from the work. Here in the U.S., we did (or certainly did everything we could to try). To an extent, some things can be measured and controlled, but many things cannot be so easily tweaked.
This is all great, but what can we do with it? Matt wanted to make sure we went home with some things we could use immediately, and one of the ideas he said we could all jump on right away is the idea of operational definitions. I had to jump in here and say that this reminded me of Elisabeth Hendrickson's recent writing on exploring requirements in her Exploratory Testing book (yes, I'm giving Elisabeth a plug here, it's my blog, deal with it ;) ). If you don't think this is important, consider the recent Presidential debate we had. Two sides saying the other is lying, while each side believed wholeheartedly that it was telling the truth. How can both be right? When they are using different definitions, it's very possible. Agreeing on the definitions helps a lot. The cool thing is, it's a lot easier to agree on definitions in software than it is in politics. It's important to realize that definitions of Quality suffer from the same problems.
Some other things we can do? Start a book club and discuss these ideas. By getting the principal players together to discuss a book, you can take out politics and divisions and just focus on the ideas. Even if you can't completely remove the politics, you can minimize it, focus on the issue, and give everyone a chance to contribute to addressing it. Another thing we can do is start taking our own "gemba walks": take the time to look at the real issues facing your team or organization. People doing the work need to feel like they can improve it themselves. When they just "follow the process", they have effectively given up. Do what you can to defeat that defeatism. Nice! Good takeaway and a great talk.

A few last-minute details: if you have the chance to send a team member to a conference, do what you can to help them have that experience. If they are deeply engaged in the work, they are likely to come home energized and full of ideas. Take the chance to try a few of them. Worst case, you may do something that doesn't work. Best case, it just might work :). What we as attendees need to STOP doing is saying "I want permission to do this". Don't ask permission, just do it. If you get good results, show them. If you don't, hey, you tried something and learned a little more. Better luck next time. Even that is a better outcome than doing nothing.
---
The program for the conference is all over the map (as well as spread over three floors), so if I didn't get to the talk you were most interested in, my apologies. For the 10:15 AM talk, I thought it would be interesting to check out "An Action Model for Risk" with Jeffrey A. Robinson. This is an excerpt of an eight-hour tutorial, but the question can be distilled to the following: after we train people on all of these quantitative techniques for a year, we then need two years to un-train them from what we've taught them! They follow the procedures to the letter, they can analyze problems, they can analyze risk, but they cannot actually make a decision. The reason? They analyze the risks and then they freak out! Sometimes, too much analysis causes paralysis. At times, we need to not analyze, we need to just do. We very often come to a conclusion based on a little poking and prodding, not a deep and comprehensive analysis. Root cause analysis is helpful, but very often it's exactly the wrong thing to do at the moment the problem occurs. Analysis makes sense later, but it often doesn't make any sense in the heat of an issue. Assigning blame can come later; right now, let's get the problem fixed!
Risk is not the same thing as uncertainty, and sometimes we confuse the two. What does risk really mean? Is it the cost of failure? The loss of time? The loss of money? The loss of prestige or of customer "good feeling"? Risk is the cost of acting (or not acting). Uncertainty is a very different animal. It's non-linear, and it's related to probability. If you have a good sense of whether the outcome will probably be good or probably bad, uncertainty is low. If you don't know the probability at all, uncertainty is at its maximum, and that's when analysis is helpful (even necessary!).
Great example: a washer failed at a company, and the failure was so bad that they couldn't even tell what the washer had been. The company's mandate was to do a reverse-engineering analysis to determine what the part was. Did that make sense? Ridiculous! Why? Because with a credit card and a trip to ACE Hardware, they bought a gradation of washer sizes for $20, tried each of them until they found the one that fit, and returned the rest. An hour of time, a $0.25 part, and they were up and running. Pretty cool example of saying "chuck the risk and try a solution NOW!"
There are several other examples that are a lot less trivial and a lot more potentially expensive. When we have a genuine risk of expense either way, where making one choice is going to be one kind of expensive and the other is some unknown but likely expensive option, then it is time to do an analysis, with the goal of either reducing the risk or reducing the uncertainty. Guess what? A Six-Sigma checklist is not likely to help you reach a good conclusion. If the uncertainty is low and the cost is low, just freakin' DO IT.
As the risk or uncertainty rises, the need for analysis goes up. When your product has the potential of "DEATH", it's OK to analyze. PLEASE, analyze! Even in those environments, there is the possibility of overdoing it; they can become so conservative as to prevent any action or innovation whatsoever. Let's face it, many of us are not working on nuclear reactor software or the microcode to drive a pacemaker, so let's stop pretending that we are. That doesn't mean we abandon all thought of risk or uncertainty, but it does mean we need to address the risks realistically, and our analysis should be appropriate, and sometimes that means it's not warranted at all! If your boat is sinking, root cause analysis is not required at this time. Swimming for your life, or finding something that floats, is!
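To make the model concrete for myself, here's a tiny sketch of how I picture this act-vs-analyze quadrant. The thresholds, numbers and labels are entirely my own invention, not anything from Robinson's talk; the point is just that analysis earns its keep only when risk or uncertainty is high.

```python
def next_step(risk, uncertainty, threshold=0.5):
    """Toy model of the act-vs-analyze decision.

    risk: rough cost of acting (or not acting), scaled 0..1
    uncertainty: how little we know about the probabilities, scaled 0..1
    The threshold is arbitrary -- the shape matters, not the numbers.
    """
    if risk < threshold and uncertainty < threshold:
        return "Just do it -- a cheap experiment beats more analysis"
    if uncertainty >= threshold and risk < threshold:
        return "Probe cheaply to reduce uncertainty (the $20 bag of washers)"
    if risk >= threshold and uncertainty < threshold:
        return "Analyze to reduce the cost of being wrong"
    return "High risk AND high uncertainty: analyze, but time-box it"

# Example: the washer story -- low risk, high uncertainty about the part size
print(next_step(risk=0.1, uncertainty=0.9))
```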
---
The next talk I decided to check out was from "Professional Passport", or at least that was what the lead-off slide portrayed. Initially, I was skeptical; I thought this was going to be a product pitch or demo, but that wasn't the case. "Cultural Intelligence: From Buzz Word to Biz Mark", presented by Valerie Berset-Price, spoke to the idea that the world's inter-connectivity has changed the way we communicate and interact with people from different cultures. I know this is true, and I have experience with it, not so much through work as through BBST. Imagining work done by interconnected teams, what will it look like in the future? Will work be regional, or will it be global? Can we develop a global mindset that is not a "my way or the highway" approach? Is it necessary to share norms to be effective?
The Lewis model uses three continuums to help determine how to interact, generally, with a business contact. The Linear-Active (blue), Multi-Active (brown) and Reactive (yellow) corners of the triangle roughly place the cultures of a variety of countries. The ways these countries and their business people interact with each other and with people around the world vary, and they sit in different places on the continuum. The point? It helps to know where people generally lie. Are they transactional or relationship oriented? Is their culture egalitarian, or do they have a stratified hierarchy? Do they celebrate the individual or the group? Are they Driven, or are they Live For Today?
These differences are real, they are cultural, and they matter. If you work with people who are from a different part of the continuum, it's important to realize that they react differently based on these cultural factors. The U.S.A., political rhetoric aside, is actually extremely egalitarian and individualistic. Many people come from countries where group-oriented approaches are more normal, where the hierarchy is well established, and where the idea of a highly educated individual reporting to a person with lesser scholastic achievement is absurd. How do you deal with those dynamics? What does it mean to actually live in these environments? How do we handle things like individual praise vs. group involvement?
What happens when you go to a country where you have to drive on the opposite side of the road (and sit on the opposite side of the car)? It messes with your sense of reality, and when you do it, you turn off the radio, you turn off the phone, you don't talk, you just drive! It shakes up your system, and you must learn how to deal with it, or risk crashing. Other cultures are the same way. The problem is not as dire as when we drive in another country, but our reaction should be the same. We need to turn off our autopilot, and we need to stop expecting people to behave the way we do. We need to learn a little "When in Rome".
This doesn't mean shutting off our morals, our compass, or our common sense. It does mean understanding what is important to a person from another culture. Don't assume that just because someone speaks fluent English, they get everything you say. I know full well just how little my Japanese gets me beyond "where's the train station, where's food, where can I buy manga?" (I'm kidding a little; I studied Spanish and German when I was a kid, and I still get some of the gist of what people are saying in those languages, and that alone has made it very real to me how challenging it can be to communicate across cultures).
---
During the lunch hour, Matt Heusser and I gave a talk about testing challenges, specifically how to use them to mentor prospective testers and help them see areas in which they could learn and develop. For those who have read my blog in the past, you are somewhat familiar with Miagi-do, the grassroots school of software testing that we are both affiliated with, and which Matt founded a few years ago.
This was the first time we have done this kind of talk and peeled back some of the veil of secrecy that surrounds the school. Well, OK, it's not really secret, we just don't advertise it. We walked the participants through a few testing challenges and discussed how those challenges could be used to help mentor other testers.
Matt and I took turns discussing different challenges, ranging from a salt shaker to getting my cell phone to play music. For those who want the answers to these challenges, well, you'll have to ask someone who was there ;). Seriously, though, it was neat to have the opportunity to share some of these challenges in a public setting and encourage others to use our model and approach with themselves or their teams.
---
For the afternoon, I am shifting my focus from speaker to moderator. Currently, we are listening to the Automation track, and I'm hearing a talk from Shilpa Ranganathan and Julio Lins (of Microsoft) about some of the challenges they face with regard to testing on multiple platforms. Both Shilpa and Julio are from Microsoft's Lync team. Leaving the server aspect of the product out of the equation, there are many possible platforms the software is meant to run on: Windows client (.NET, Silverlight), web clients, and mobile clients for iPhone and Windows Phone 7 (hmmm, no Android?). Making tests that can be reused across those platforms would seem like an obvious win.
That makes sense, of course, but what we would like to do and what actually happens are often very different. It was interesting to see the ways in which they structured their testing and design so that they could reuse as much as possible. As their talk makes clear, it's not possible to completely reuse their code on all platforms, but it makes sense for as many cases as possible. This gels well with my own "write once, test anywhere" efforts, and while it's not entirely achievable in all circumstances, it certainly pays off where you can do it. When in doubt, smaller test cases are usually more portable than all-encompassing ones. Also, making the effort to stabilize each platform made more sense than trying to just copy test code between environments.
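To picture what "write once, test anywhere" can look like in practice, here's a rough sketch of one common way to do it: keep each test small and write it against a thin platform-abstraction layer, then plug in one driver per platform. None of this is from the Lync team's actual framework; the class and method names are purely hypothetical illustrations.

```python
from abc import ABC, abstractmethod

class ClientDriver(ABC):
    """Hypothetical platform abstraction: each platform supplies its own driver."""

    @abstractmethod
    def sign_in(self, user: str, password: str) -> bool: ...

    @abstractmethod
    def send_message(self, to: str, text: str) -> bool: ...

class FakeDesktopDriver(ClientDriver):
    """Stand-in desktop implementation so the sketch runs end to end."""
    def sign_in(self, user, password):
        return bool(user and password)
    def send_message(self, to, text):
        return bool(to and text)

# The test itself is small and platform-agnostic: it only talks to the interface.
def test_send_message(driver: ClientDriver):
    assert driver.sign_in("alice", "secret")
    assert driver.send_message("bob", "hello")

# In a real suite, each platform (desktop, web, mobile) would register its own
# driver and the same small test would run against all of them.
test_send_message(FakeDesktopDriver())
print("test_send_message passed against the fake desktop driver")
```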
---
For my second moderated talk, I am listening to Vijay Upadya, also from Microsoft, discuss ideas behind dynamic instrumentation and how it can be used with state-based testing. The idea is that there is a specific state that needs to be established for a test to be valid. Often, we cannot fully control those state transitions, or catch them at the critical moment they happen, because they ultimately happen in the background, and rarely on the timeline we have set. Dynamic instrumentation uses code and binary manipulation to set up these states as we need them to occur. Sounds tricky? Apparently, yes, it is.
Vijay talked about this approach in the context of the project he was working on while Microsoft was developing SkyDrive. By using a library called Detours, they were able to rewrite code in memory to create specific state conditions as needed and, most importantly, in a repeatable manner. I'll admit, part of me is intrigued by this, and part of me is disappointed that this is a Microsoft-specific technology (nothing against Microsoft, but now I'm wondering if there's any way to do this in the Ruby space; I'll have to investigate later this evening when I get some free time :) ).
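Detours itself is a C/C++ library that patches function entry points in memory, so I can't do it justice in a few lines here, but the underlying idea of forcing a hard-to-reach state by intercepting a dependency translates to higher-level languages too (which is why the Ruby question nags at me). As a rough, purely illustrative analogue, here's a Python sketch that intercepts a call to force a "sync conflict" state a test might need; the function names are my own invention, not anything from Vijay's framework.

```python
from unittest import mock

# Imaginary production code: the state we care about depends on a call we
# don't control directly (analogous to the background transitions Vijay described).
def fetch_remote_version(path):
    # In real life this would hit a server; here it is just a placeholder.
    return 1

def has_sync_conflict(path, local_version):
    return fetch_remote_version(path) != local_version

# Test: force the "remote is newer" state by intercepting the dependency,
# rather than waiting and hoping the real system wanders into that state.
def test_sync_conflict_detected():
    with mock.patch(__name__ + ".fetch_remote_version", return_value=7):
        assert has_sync_conflict("notes.txt", local_version=1)

test_sync_conflict_detected()
print("conflict state reached repeatably via interception")
```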
---
With my growing involvement with mobile devices, the idea of a mobile-focused discussion appealed to me, so Billy Landowsky's "Shake 'n' Send" caught my attention. Think about this... how do we get reasonable feedback from our mobile users? How can we get real and effective feedback to inform how these apps are developed? The screen size limitations, single-app focus and the very "walled garden" realities of many of these applications make for a challenging environment to develop for, much less test (this was brought home to me this past weekend, when we did a Weekend Testing event specific to mobile app testing and the challenges we face).
In desktop applications, there are "Collect Logs" options, or a customer feedback program that occasionally polls a server and sends updates. On a mobile device, over a carrier's network, that kind of communication model is less desirable (not to mention, for some users, impossible, as network connectivity is not always guaranteed).
Sure, there are a lot of disadvantages... but what about the pluses that could be leveraged? The solution developed for Windows Phone 7 devices is called Shake 'n' Send. The accelerometer in the device detects the shaking gesture and uses it as the trigger to send feedback to the Microsoft servers. The feedback is queued as email so that, when a connection is available, the OS takes care of sending the messages.
By shaking the phone, feedback ranging from "Happy" to "Bug" can be sent, with trace info and diagnostic data included. The biggest benefit to the user: they never have to leave the application. Gotta admit, this sounds like an intriguing idea. I'd be curious to see how this develops over time.
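I don't know how the team actually implemented the gesture detection, but conceptually a "shake" is just the acceleration magnitude spiking a few times in a short window, with the report queued until connectivity returns. Here's a small, purely speculative sketch of that flow; the threshold, window and queue behaviour are my own guesses, not anything from Billy's talk.

```python
import math
import time
from collections import deque

SHAKE_THRESHOLD = 2.5    # g-force magnitude treated as a "shake" (arbitrary guess)
SHAKE_COUNT = 3          # spikes needed within the window
WINDOW_SECONDS = 1.0

outbox = deque()         # feedback waiting for a network connection

def is_shake(samples):
    """samples: list of (timestamp, (x, y, z)) accelerometer readings."""
    spikes = [t for t, (x, y, z) in samples
              if math.sqrt(x * x + y * y + z * z) > SHAKE_THRESHOLD]
    return len(spikes) >= SHAKE_COUNT and (spikes[-1] - spikes[0]) <= WINDOW_SECONDS

def queue_feedback(mood, diagnostics):
    """Queue the report; it goes out later, whenever connectivity is available."""
    outbox.append({"mood": mood, "diagnostics": diagnostics, "queued_at": time.time()})

def flush_outbox(network_up):
    while network_up and outbox:
        report = outbox.popleft()
        print("sending feedback:", report["mood"])   # stand-in for the real send

# Example: three hard jolts within a second count as a shake
now = time.time()
readings = [(now + 0.1 * i, (3.0, 0.2, 0.1)) for i in range(3)]
if is_shake(readings):
    queue_feedback("Bug", {"trace": "..."})
flush_outbox(network_up=True)
```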
---
The last official track talk of the day is one I have wanted to see for a long time. When I first came to PNSQC in 2010, I was moderating another track, so I couldn't see it. Last year, I was laid up with a broken leg and rehabilitating, and couldn't attend PNSQC at all. Thus, I made sure to keep my schedule clear so that I could actually watch these high school students from Lincoln High School in Portland discuss "Engineering Quality in the Robotic World" and chart their discoveries and growth over the past several years (their club has been active since 2005). Their journey from the First Lego League and its basic level of software development to more advanced robotics development and programming has been fascinating.
Each of the teenagers (Ida Chow, MacLean Freed, Vicki Niu, Ethan Takla and Jeffry Wang, with their sponsor Kingsum Chow) showed how they developed their robots, attempted and overcame a variety of challenges, and ultimately shared the lessons learned from their discoveries and competitions. I had a chance to interview several of the members of this team during the Poster Paper presentation in 2010, and it's cool to see how they have developed their skills since that interview two years ago. Seriously inspiring, and a bit humbling, too. Props to them for their presentation skills as well, as they have become quite good. Now, to their presentation... One of the cool things was seeing the design limitations they overcame, as well as how they made the move from plastic LEGO blocks to machined metal, finer motor controls and a more powerful programming platform (using RobotC).
Hearing this topic is neat in and of itself. Hearing five high school kids talk to this level and with this level of confidence kicks it up to just short of amazing. Seriously, these kids are impressive, and barring some catastrophic derailing of focus or attention, I think they will have a magnificent future in this industry :).
---
We had a great evening reception, and I had many opportunities to discuss my paper's points and my own topic of "Get the Balance Right: Acceptance Test Driven Development, GUI Automation and Exploratory Testing". Each group I discussed it with helped me to see an additional facet I had not directly considered, so much so that I think there's a lot more potential life in these ideas beyond just a paper and a presentation, so stay tuned for more on that front. With that, it's time for dinner, so I'm going to seek out one of Portland's famous food trucks and call it a night. See you all tomorrow!