Don't ask me why this came to mind today, but as we were visiting friends down in Fresno this weekend, we were out walking and talking about things, and I lamented that I no longer had time to do certain things the way I used to. I joked that I didn't have time to play video games, watch TV, or read books for the fun of it anymore. As we were talking about this, I quoted the line Al Pacino delivers in The Godfather, Part III... "every time I try to get out, they pull me right back in!" (I'm paraphrasing that, btw. I'm probably saying it wrong :) ).
The point I was making, though, was that about 18 months ago, I was basically just working a job. I was testing software, and I was doing OK at it. Nothing terrible, but nothing stellar either. I was doing what I needed to do, and at the end of the day, I went home and did just about anything else I wanted to do. I played video games with my kids. I read whatever book I wanted to read (usually non-technical). I often spent hours at my computer watching Anime or reading Manga. It was fun, and it was a diversion of sorts.
Looking back, I'm somewhat amazed that I had that much time on my hands, because I really didn't feel like I did. I always felt like I wasn't able to get "important" things done. What important things? I honestly didn't know, but they had to be out there somewhere; I'm sure they didn't involve watching marathon sessions of "Death Note" (not that there's anything wrong with that ;) ).
An interesting thing happened when I started writing this blog and making some commitments. I found that there were important things I could do. In fact, there were so many important things, I could drown in them. I won't rehash the steps I went through or the commitments I decided to take on to help get me out of software testing as just a job and to the point where I found myself becoming an evangelist for testing, but I will say that there is a price for evangelism!
One of the fundamental truths I learned is that time is a non-renewable commodity, and there's no way to change its course. It can't be slowed down or sped up; it can't be stopped, it can't be banked, and contrary to a lot of literature out there, it can't be managed (really, it can't!). There's only one thing people can do with time, and that's use it. Period. If you choose to do something, you will have to choose not to do something else. It's that simple... and yet, when we commit to doing more and getting more involved, opportunities do seem to come along that make it feel like we can do so much more with our time than we ever imagined we could.
A friend of mine posted a cute Facebook status today... "There are three great motivations in life; we do things out of fear, we do things out of duty, and we do things out of love. Frankly, I want to focus my energy on the third!" I think she's on to something here.
Many of us do a lot of things out of fear. We don't want to be seen as flakes, we don't want to get into trouble, we don't want to incur technical debt. Fear is a strong motivator, to a point, but it's one of the areas where "The Resistance" can work against us. When we fear something, all we have to do is discover a greater fear, and The Resistance will help us retreat to a spot that is comfortable (for those not familiar with my frequent abuse of the terms "The Resistance" and "The Lizard Brain", read Seth Godin's book "Linchpin". All will be revealed :) ).
Duty is a more powerful motivator than fear. I will often do things out of duty, out of a feeling that I should be doing them. We often approach our jobs with that sense of duty, because it's what we were hired to do. We have an obligation to do our best work. We have a sense of commitment, a sense of purpose, and we want to live up to what we have been asked to do. Duty can be overcome, though; when we feel overwhelmed, when we get exhausted, we can decide that, meh, maybe it's not so important after all (our "Lizard Brain" decides that even duty isn't insurmountable if it wants to hide out and play it safe).
Love, however, can genuinely trump everything else. When we approach something because we love it, nothing gets in the way! I'll give my example of producing the TWiST podcasts. At first, fear was the motivator, or in this case, the flip side of fear, excitement. I was excited that I was going to do something that I hadn't done before, and really, I had no idea how I would do it! There was a rush of fear, but a rush of excitement, too. I could fail at this, but I wasn't so scared that I shied away from it. After several weeks of hit-and-miss discoveries, I realized that I was in a position where, if I decided I didn't want to do this any longer, the podcast would have serious problems getting posted on a weekly basis. Could I be replaced? Sure, but it might take a while, and in that intervening time, the people involved would be affected. I felt responsible to them, and thus, I had a duty to make sure that these podcasts got completed on time. That lasted a few weeks, and then the love of the project took over. Now, getting an interview notice is a highlight of my week. I honestly can't wait to work on it sometimes, and I have to actually tell myself "OK, but hang on, there are other things that have to be done first". Oftentimes, I've had to stop myself from editing the podcast because I was using that time as an avoidance mechanism for something else I knew I should be doing.
My title is meant to be a little in jest, but it really does have a bearing on the things that I do. I am motivated to do things for people and causes because I'm anxious about what will happen otherwise. In that case, I do things with a sense of fear, because I don't want something else to happen. It's not a great long-term motivator, though. I'm definitely motivated to do things out of duty for the roles that I fill. As a Tester, as a Scoutmaster, it's the role that provides the motivation. When we get to love, though, it's always the relationship with the people that's the driver. Not abstract love, but genuine people that we speak to and interact with in some meaningful way. Get to that level, and seriously, it's amazing how willing one becomes to move Heaven and Earth to meet a goal. Thus, if you have a goal you know you need to accomplish, but it scares you enough to really keep you from going for it, try to find a way to "keep it in the Family", to get it out of the realm of fear and duty, and into the realm of something you love to do.
Sunday, February 27, 2011
Friday, February 25, 2011
TWiST #34 - with Dawn Cannan
I’m trying to use a lighter hand on these interviews… as many of you know, I’m a bit OCD when it comes to ums and ahs and other “stutters”, and usually, I work aggressively to edit them out.
While I feel that it makes for a more polished podcast recording, it also takes a lot of time to do it to that level, and in some ways, I think it may be too much work for too little benefit (a law of diminishing returns, so to speak). Thus, I’m experimenting with a more natural feel and focusing more on editing the flow of the show and the dialogue, and less on the ultra anal-retentive “speech police” attitude. BTW, this is not meant in any way to reflect on the speakers or their interview styles. Everyone does this to some level, myself included. More than anything else, I’d be interested in hearing back from some of you if you feel that the lighter hand is better, worse, or you really can’t tell the difference.
This week’s interview is with Dawn Cannan, who you can follow on Twitter as @dckismet and whose blog you can read over at http://www.passionatetester.com/. Dawn focuses a lot of this interview on ideas related to writing about testing, why she chose to take the technical track rather than going the management route, and the joys and challenges of working with distributed teams (specifically her role as a remote worker in that environment). She also spends some time talking about Selenesse, an approach and blending ground for using Selenium and FitNesse together. In any event, if you’d like to listen to Episode 34, by all means please do :).
Standard disclaimer:
Each TWiST podcast is free for 30 days, but you have to be a basic member to access it. After 30 days, you have to have a Pro Membership to access it, so either head on over quickly (depending on when you see this) or consider upgrading to a Pro membership so that you can get to the podcasts and the entire library whenever you want to :). In addition, Pro membership allows you to access and download the entire archive of Software Test and Quality Assurance Magazine, and its issues under its former name, Software Test and Performance.
TWiST-Plus is all extra material, and as such is not hosted behind STP’s site model. There is no limitation to accessing TWiST-Plus material, just click the link to download and listen.
Again, my thanks to STP for hosting the podcasts and storing the archive. We hope you enjoy listening to them as much as we enjoy making them :).
Thursday, February 24, 2011
Wednesday Book Review: Stephen King “On Writing”
Yes, my astute readers will notice that this is being written on a Thursday, but I typically do my book reviews on Wednesdays, and I so hate to break up a set :).
Having said that, some might look at the title of this week’s review and think “Huh? What does that have to do with software testing?” In my typical manner, my reply is “it has nothing to do with it and everything to do with it”.
A little history… one of my favorite and most oft-repeated podcast listens has to be, without a doubt, Merlin Mann and John Gruber’s talk at SxSW 2009 entitled “HowTo: 149 Surprising Ways to TurboCharge Your Blog with Credibility”. Yes, it's a somewhat silly title, but it has proven to be a treasure trove of information related to my desire to develop TESTHEAD as a blog that means something and depicts something I’m passionate about, as opposed to a way to make money (which is good, because my blog doesn’t make me any money, so I’m covered there :) ). In this talk, Merlin discussed Stephen King's book “On Writing” and said there are two types of people: those who have never read “On Writing” and get irritated about people who talk about it, and those who have read it and said “It changed my game!”
That was far too ringing an endorsement and a challenge for me to pass up, so I ordered it around Christmas and promptly put it on the stack of books I would get to reading as soon as I transitioned into my new job and got my bearings.
So is “On Writing” really all that?! In a short word, the answer is “yes”, but I’m not going to stop with a short word (otherwise, what’s the value in a review?!). In more lengthy terms, “On Writing” is a practitioner's love letter to the craft, written in the way that only Stephen King can write it. First, one thing to get out of the way: you do not have to be a fan of Stephen King’s work to appreciate this book, though I can imagine it would certainly help. Personally, I like some of his stuff; other books of his, meh! Truth be told, I feel that way about most authors, including my personal favorites (such as Orson Scott Card).
Still, as a blogger, I can appreciate the effort that goes into practicing the craft of writing. As a somewhat late-blooming writer who is striving to get work published (one chapter for a book and an article for a web periodical is, to date, the sum total of my accepted and published or pending-publication body of work, but I’d love opportunities to do more), I realized that, even if I am much more focused on writing non-fiction and technical commentary, many of the same rules apply. To this end, Stephen King offers a lot of great insight, a working toolkit for an aspiring author, writer, or blogger to get acquainted with, and some concepts that help writers tell a more compelling and interesting story.
My goal as a blogger is not to write fiction. In fact, my goal is the exact opposite: to share real experiences and write about real ideas, events and people (OK, occasionally I change names to protect the identities of participants or to make sure I don’t violate an NDA while I create a blog post), but I am still bound by the same rules and expectations. Ultimately, I’m here to tell a story, and my story needs to be compelling and interesting, else why would you or anyone else bother to read it? To this end, King’s advice is excellent. He makes the point that there are rules of the road. There are grammar rules that are best followed, and there are style rules that are helpful and vital to the process, but that’s about the extent to which King focuses on the rules. King describes early on in the book his own upbringing and the events that led him to the present day. He handles this in about the first 15% of the book, plus a small section towards the end, but otherwise spends very little time on personal matters. In short, he sets the stage for why you would care to read about his ideas.
Next, he describes what he calls his writer's toolbox, which has a few well-used and well-crafted tools, but those tools are powerful when used correctly. Vocabulary is one, but not to the point where it’s overwhelming or pretentious. Use your vocabulary the way you would actually use it; don’t just pick out words because they look good, because they will come off as both stuffy and inauthentic. Focus on good grammar, and exercise it sparingly. Get to the point, be direct, and when in doubt, be more direct and lean rather than padding things just because they sound good or feel important. He then focuses on style, and here’s where the meat of the book comes into play. Not a lot of style rules, but the ones that actually matter. Lose the passive voice (a lesson I had drilled into me as I prepped my chapter for the Cost of Testing book), and structure paragraphs so that they are inviting, not impenetrable and dense. Above all, edit, and be brief when in doubt.
One of the valuable aspects that I appreciate is the idea of identifying your "Ideal Reader". Merlin put this very succinctly in the SxSW talk by saying that when he writes, instead of trying to put something out that he thinks will get a lot of page views, he aims to make what he writes appeal to a specific person, or as he put it, he wants to have someone he respects look at what he is writing and not think it’s a load of crap. I do much the same thing. Who is my ideal reader, you might ask? It’s anyone in the Twitter-verse who has taken the time to read my stuff and comment on it. It’s the Miagi-Do ka (people like Ajay Balamurugadas, Markus Gartner and Matt Heusser), it's the frequent readers who come back time and time again to leave comments and offer encouragement (Albert Gareev, Shmuel Gershon, Devon Smith, Adam Yuret and numerous others), and other testing writers that I have a great deal of respect for, such as James and Jon Bach, Elizabeth Hendrickson, Doug Hoffman, Cem Kaner and Bret Pettichord. The fact that I might be writing something that some of these people would dig, yeah, that motivates me!
King’s philosophy about writing is that there is a large pool of bad writers, a slightly smaller pool of competent writers, an even smaller pool of good writers, and a really small pool of great writers. He doesn’t believe the bad writers will get to be good, or that the good writers will get to be great, at least not by the advice in a book. He does, however, believe there’s a lot that can be done to bridge the gap between competent and good, and in his estimation, most people who are willing to put the effort in fall into that camp (and by extension, I’m hoping I do as well; I’d like to believe I do). King emphasizes that good writing is not necessarily super-polished literary work that would gain critical praise, but it is writing that is authentic and sounds and feels natural.
King is often quoted as saying the first draft is done with the door closed, and the second draft is done with the door open. By closing the door, we focus on getting our thoughts out and fleshed out. From there, we then welcome our readers in and ask for their input for the second draft. In truth, I do this and I don’t do this, because very often, my first draft goes right onto the blog. Still, I think it’s good advice, in that most ideas should be created and written in isolation, and then opened up for others' consideration after the main idea is down.
Additionally, to be a good writer, one has to write. A lot. One also has to read. A lot. Having a desire to write, but saying you don’t have time to read, basically defeats the purpose. By reading, we learn what we want to do and what we don’t want to do (the good with the bad, or even just the style we don’t want to use).
I like the closing advice of the section, which is that if you are going to write, write for the right reasons. I apply this to the blog in this manner: I write because I want an active memory of my experiences that I can go back and reference and remind myself where I’ve been and where I want to go. Additionally, I want to be a warning to others (LOL)! What I mean is that I post both good things and bad things in this blog, my successes and failures, triumphs and frustrations, in the hope that some young tester down the road will learn from my mistakes and not have to repeat all of my screw-ups!
Bottom Line:
You may or may not be a fan of King’s body of work, but you don’t have to be to appreciate the value of “On Writing”. It’s an authentic, earthy, and very real view into the mind and life of Stephen King and the experiences that have shaped him (his early life, his successes and failures, his alcoholism and recovery, and the accident in 1999 that nearly ended his life). Most of all, it shares the message that writing just plain matters to him, and after a career of almost 40 years, he’s figured out a few things that are worth looking at and learning from. Even if your goal is not to write the heir apparent to The Stand (which, if anyone cares, is my all-time favorite Stephen King book), or to write fiction at all, give “On Writing” a read. For those who have said “it changed my game”, I’m not sure I’m at that point just yet, but I have a glimmer of an understanding as to why it receives the praise it gets.
Tuesday, February 22, 2011
San Francisco Selenium Meetup for 02/21/2011
With much thanks to OPower in San Francisco for providing the meeting location, and SauceLabs for providing the food and drink as always, the first meeting for 2011 of the San Francisco Selenium Meetup group got underway. For those not familiar with the meet-up approach, there are groups all over the country (heck, all over the world) that hold group meetings about technologies, hobbies, crafts, you name it. This group happens to be about Selenium, and thus all things and technologies that surround Selenium.
Tonight’s session was specifically dedicated to dealing with Selenium problems, and issues that people have had with Selenium. The speakers for these events are usually volunteers in the community and likewise are developers and testers actively using the tools.
One of the cool things about this group is that the first thing they do is announce who’s hiring, since so many people at these groups have the skills or are actively building them. Three companies announced they were looking, several with multiple open positions (what a wonderful thing to see; there seems to be a bit of a tech boom going on South of Market :) ).
Eric Allen from SauceLabs covered some topics regarding Selenium RC’s proxy server, starting with the architecture to help users understand how everything talks to everything else. The talk went through details specific to capturing network traffic, which helps with debugging and even some performance monitoring. One of the cool little options he talked about was trustAllSSLCertificates, which allows testers to work with a "valid" certificate when testing internal SSL setups. Note, this is specific to testing, and is not recommended at all for production environments (LOL!). If you have any questions for Eric about this, you can get to him at @ericpallen on Twitter.
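To make the idea a bit more concrete, here's a minimal sketch of how those two features might be exercised from the Selenium RC Java client. This isn't code from Eric's talk; the URL is a placeholder, and it assumes the server was started with the -trustAllSSLCertificates flag (something like java -jar selenium-server.jar -trustAllSSLCertificates).

```java
import com.thoughtworks.selenium.DefaultSelenium;

public class ProxyTrafficSketch {
    public static void main(String[] args) {
        // Placeholder site; point this at whatever you're actually testing.
        DefaultSelenium selenium = new DefaultSelenium(
                "localhost", 4444, "*firefox", "https://example.com/");

        // Ask the RC proxy to record the HTTP requests the browser makes.
        selenium.start("captureNetworkTraffic=true");
        selenium.open("/");

        // Pull back the recorded traffic for debugging or rough timing data.
        String traffic = selenium.captureNetworkTraffic("json");
        System.out.println(traffic);

        selenium.stop();
    }
}
```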
Dan Fabulich from RedFin discussed an approach to test files on disk with Selenium. The biggest issue with Selenium is "flakiness" (guess what? this was the #1 question from my development team). How do we get around the flakiness issue? Don't test the live site, test files on disk directly! Say what?! Yep, instead of going to a site directly, create a system that works on local files. Another benefit is consistent timing. By loading the file from disk, you can reliably determine how long it will take to load. This approach also eliminates dependencies. If most of the testing is done through external services, each of the tests is going to fail when those services do, and having local files do the work removes all of those external issues. Another benefit of running files on disk is that you can eliminate what are referred to as "dirty tests", the failures that happen because tests cannot access a changed item somewhere external. There were lots of other options that Dan explained and I'm just not fast enough to type up all of them, but suffice it to say, this is an interesting idea as a supplement to testing and focusing on unit tests and local integration tests. Clever stuff :)! Oh, and RedFin... they're hiring :)!!!
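The gist of the approach, as I understood it, might look something like the sketch below. This is my own paraphrase, not Dan's code; the file path and the expected text are made up, and whether file:// URLs work cleanly depends on your browser launcher and proxy settings.

```java
import com.thoughtworks.selenium.DefaultSelenium;

public class FileOnDiskSketch {
    public static void main(String[] args) {
        // Base URL is a local file scheme instead of a live server.
        DefaultSelenium selenium = new DefaultSelenium(
                "localhost", 4444, "*firefox", "file:///");
        selenium.start();

        // Load a static copy of the page straight from disk;
        // no network, no external services, consistent load times.
        selenium.open("file:///tmp/pages/search-results.html");

        // The markup can't change underneath you between runs.
        if (!selenium.isTextPresent("Search Results")) {
            throw new AssertionError("Expected heading not found");
        }
        selenium.stop();
    }
}
```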
Lalitha Padubidri and Leena Ananthayya from Riverbed discussed some issues surrounding WebUI automation. They discussed some challenges specific to Riverbed and some of the building blocks they use for automation. One of the interesting aspects of Riverbed's products is that they are not testing a web site, they are testing a network appliance. Ideally, tests should be reusable, scalable and easy to learn. They use a lot of data-driven methods. To do this, they use a lot of data abstraction techniques that allow many of the components to be made into widgets that can be called as needed. By using "factory design patterns", the code can be shared among many products and scripts. By using these techniques, they can expand 50 basic tests out to 810 test runs across different browsers and products. They do have some gotchas that they are working with and around, such as a lack of screen shot capture and Selenium Grid reliability on VMs, but they are making strides. Oh, and if you haven't already guessed... Riverbed... is hiring :)!
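Just to illustrate the widget/factory idea in general terms (all of the names below are invented for the sake of the example; this is not Riverbed's code), the shape of it is roughly:

```java
import com.thoughtworks.selenium.Selenium;

// A reusable "widget" wraps one piece of UI behind a small interface.
interface Widget {
    void fill(Selenium selenium, String value);
}

class TextFieldWidget implements Widget {
    private final String locator;
    TextFieldWidget(String locator) { this.locator = locator; }
    public void fill(Selenium selenium, String value) {
        selenium.type(locator, value);
    }
}

// The factory is the one place that knows how a given product lays out
// its fields, so the same test script can run against several products.
class WidgetFactory {
    static Widget hostnameField(String product) {
        return new TextFieldWidget(
                "//form[@id='" + product + "-setup']//input[@name='hostname']");
    }
}
```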
Alois Reitbauer from dynaTrace discussed the idea of extending Selenium for performance testing (sorry, dynaTrace is not currently hiring, but hey, 75% of presenters hiring is a pretty awesome percentage :) ). The example shown was just a simple test that goes out to Google and searches for dynaTrace. By adding three additional environment variables, dynaTrace's agent can record performance data (granted, this is of limited benefit if you do not have dynaTrace's performance tools, but it's kinda' cool to be able to see how Selenium can grab performance data and help automate performance criteria in tests). What's interesting to see from all of these presentations is how each group uses Selenium a little bit differently, and how Selenium can help disparate tools work better together, or even leverage components from proprietary tools and make them work better (in this case, the idea that Selenium can be used to automate other tools in addition to web tests is rather compelling).
Granted, each of these discussions was done in lightning-talk fashion, so there isn't much time to go into great depth on each of these topics, but it's exciting to see what other groups are doing with Selenium and, yes, even openly discussing the challenges they have faced implementing it. Some of the methods are quite clever, and some are methods I never considered (gotta' admit, I'm actually really interested in trying a local file-on-disk approach to tests; maybe that will be the key to helping us address some of the "flakiness" our developers have commented about. Is it a perfect solution? Probably not, but it's an interesting one ;) ).
Again, my thanks to everyone from the broader Bay Area Selenium community for making these events both memorable and accessible. While I can't make each and every one of them, I try to get to as many of them as I can. The community as a whole helps to make these events possible, so my thanks to everyone who helps put them on. I always learn something new, and even if I don't always understand everything discussed, if I can walk away with one fresh idea to try, that is a success in my book.
Saturday, February 19, 2011
Weekend Testing Americas #7: Smart Enough To Get Hired?
So this was a bit of a different approach to a Weekend Testing exercise. Albert Gareev and I had been commiserating over the fact that, too often, after we had announced the testing mission and charter, testers would quickly take off. After a few minutes, someone would ask a clarification question, and it would become clear that we had missed something. Those who stuck around to hear the clarification were able to work with that fresh knowledge; those who had already gone off to test didn't have that fresh piece of news. Thus, when we'd get together with the group for the debrief, invariably someone would comment on the fact that they didn't know that new piece of information had been provided.
For this week's testing session, we wanted to try something different. Specifically, we phrased the challenge in a way that had little to do with the program itself, and more with the premise. We also said early on in the challenge that we didn't want to have people run off to test until they heard everything we were going to share. This had the desired effect of having everyone stick around until the mission had been explained.
The Game/Puzzle: Cross the River
Press the round blue button to begin. The goal is to get all eight people across the river. You must respect the following rules:
- Only two people can be on the raft at one time.
- Only the mother, father & officer can operate the raft.
- The mother cannot be left with the sons without the father (or she’ll beat them).
- The father cannot be left with the daughters without the mother (or he’ll beat them).
- The thief cannot be left with anyone without the officer (or there will be even more beat-downs).
Hey, don't look at me, I didn't make the game ;).
The Mission:
“I’m hiring staff for my IT department. I was told that this simple program will help me in finding the smartest candidates. Your mission: test the program and report how it suits my needs!”
With that, a number of the testers stuck around and really grilled us. What was the point of the mission, why did we think that a game or puzzle would actually help weed out the smartest testers? While the flaws of this system were discussed, Albert and I noticed that the goal we had set early on was working. The testers were not running off to test, they were sticking around and actively questioning the mission!
We took the time to look at ways that we could question the claims of a product using a “focusing/defocusing” approach in testing. Along with this, we tried out some different approaches to discussing the problem. Albert would act as the "stakeholder" in the situation, voicing certain ideas and concepts by putting the text in quotes, such as "we have considered that, this is meant to help us automate the process of hiring as much as possible". This technique was used so that it wasn't Albert or me leading the testers, it was the "stakeholder", and this way the testers would more readily question the claims. In the past, we had seen that if Albert or I said that something was based on a particular approach, more often than not the testers would take us at our word. In this manner, they were much less likely to.
Another unique approach this time around was that we encouraged the use of hashtags in the chat session. The new one we introduced today was the #Danger hashtag. This was used to help identify potential hazards to the stakeholders during this testing session. If you'd like to see the Dangers discovered, as well as the #Issues we found with the process, please feel free to check out the chat transcript for details.
Friday, February 18, 2011
TWiST #33 - with Jon Kohl
As can be expected, after a while, we start to get “repeat offenders”. Selena Delesie did a paired interview with Lynn McKee, and Jane Fraser and I both appeared in our own spots and at the Tester’s Dinner, but Jonathan Kohl is the first TWiST guest to be considered a full-blown repeat offender, meaning two shows where he’s been the only guest.
Jonathan originally appeared in Episode 7, and he’s been up to a lot in the ensuing six months since we last talked to him (plus, this time he didn’t have to suffer because of my trying to edit a podcast while actively chasing Scouts like ducks around Scout camp; really, I produced my second show for TWiST while I was up at Scout camp, and it was an interesting experience, to say the least).
Jonathan Kohl covers a number of topics related to mobile testing. Having just gone through a big push to release a mobile app, much of this talk was right on the money for things that I either experienced or wished I’d known at the time. But don’t take my word for it, go and have a listen to Episode 33 for yourself.
Standard disclaimer:
Each TWiST podcast is free for 30 days, but you have to be a basic member to access it. After 30 days, you have to have a Pro Membership to access it, so either head on over quickly (depending on when you see this) or consider upgrading to a Pro membership so that you can get to the podcasts and the entire library whenever you want to :). In addition, Pro membership allows you to access and download the entire archive of Software Test and Quality Assurance Magazine, and its issues under its former name, Software Test and Performance.
TWiST-Plus is all extra material, and as such is not hosted behind STP’s site model. There is no limitation to accessing TWiST-Plus material, just click the link to download and listen.
Again, my thanks to STP for hosting the podcasts and storing the archive. We hope you enjoy listening to them as much as we enjoy making them :).
Wednesday, February 16, 2011
Wednesday Book Review: Selenium 1.0 Testing Tools: Beginner’s Guide
As many of you know, I am in the process of doing a long-form review of this book for my blog under the PRACTICUM heading. For various reasons, that project is going more slowly than I intended (mostly because I’ve changed jobs and I’ve been strapped for time to get the final three chapters finished, but they are in process). However, I have read this book and feel it deserves its full and appropriate review (Packt Publishing was very nice to provide me a copy of the book, so I feel it only appropriate to respond with a timely enough review to make it worth their while :) ).
Selenium 1.0 Testing Tools: Beginner’s Guide is exactly that. It is a book aimed at those who are beginners with the technology (not necessarily beginners to testing or coding). Overall, David Burns has written a good book here. The conversational style is helpful; you feel like you are talking to a team-mate who is explaining the system to you in a friendly and engaging manner. The exercises selected are pitched at a level that eases you into working with the technology and the tools. The first five chapters are dedicated to the Selenium IDE, which for many testers is all they will ever see of Selenium, and for many testers who want to automate front-end tests on Firefox, it may be all they ever need. If that’s the case, the first five chapters will give you a lot of practical information and some solid ninja-level skills to write tests and make them robust. They will be limited to the Selenese format, but even with that limitation, there are a lot of cool things a tester can do, from declaring and using variables, to calling JavaScript events, to examining objects and creating robust tests to help enhance testing.
The second half of the book deals with Selenium Remote Control (RC), and here is where some deviation takes place, at least for me. First off, to be fair, let me get a couple of things out of the way. I did not have the exact environment described in this book. I was close, but I had some set-up differences that caused some headaches. A number of the examples in the book just flat out would not work for me. Since my environment was different from that recommended (a Windows 7 machine on a 64-bit Athlon processor), I can’t hold David responsible for my roadblocks. It does, however, show a critical issue with the learn-by-doing format of this book. If everything is the same, the format is terrific, and in the first six chapters I was able to accomplish most of the project objectives and practice using the tools as listed. When I found myself stuck, there was little I could do to get around the issues. Be aware of that going in; your mileage may vary with the examples shown in the book.
Another frustration I had was specifically in Chapter 7, which dealt with writing your own scripts in Selenium RC. The chapter is structured around using the IntelliJ IDEA Java IDE. In and of itself, this is not a bad practice, but when things start to not line up due to environments being different, adding an IDE into the mix can add even more complications and details to keep track of. While I appreciated the clean nature of integrating JUnit and the Selenium libraries under one roof, it added another layer that I personally felt might have been better handled by not including another tool. Agreed, CLIs are not sexy and they require repetition that the IDE doesn’t, but I would have appreciated more specific and more complete examples. As it is, to get around the behaviors I was seeing, I generated tests in the IDE, saved the structure as a separate file, and then loaded that file to write my tests.
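For anyone curious what the end product of that chapter looks like, a bare-bones JUnit 4 + Selenium RC test is roughly the following (my own sketch, not an excerpt from the book; the URL and assertion are placeholders). It compiles and runs from the command line just as well as from an IDE, as long as the JUnit and Selenium Java client jars are on the classpath.

```java
import com.thoughtworks.selenium.DefaultSelenium;
import com.thoughtworks.selenium.Selenium;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class HomePageTest {
    private Selenium selenium;

    @Before
    public void setUp() {
        // Assumes a Selenium RC server is already running on port 4444.
        selenium = new DefaultSelenium(
                "localhost", 4444, "*firefox", "http://example.com/");
        selenium.start();
    }

    @Test
    public void homePageHasATitle() {
        selenium.open("/");
        assertTrue(selenium.getTitle().length() > 0);
    }

    @After
    public void tearDown() {
        selenium.stop();
    }
}
```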
Again, these criticisms are actually pretty minor, and they are colored by the fact that my own environment had trouble with the examples (and David does state that 64-bit environments will have issues). Overall, this is a great first effort in describing the tools and the methods involved in loading, installing, and working with the Selenium stack. As the technology becomes better known and solidifies, I’d like to see what David does with a future Selenium treatment. There’s a lot of gold in here, and a few pieces of pyrite (and again, I have to take responsibility for my own panning), but all in all it fills a void in the documentation for Selenium with a continuity that a beginner can appreciate and get productive with quickly.
The Evidence Before the Court...
So after several months of pondering whether I was ready to take the plunge, I decided it was time. I took the jump and signed up for Bug Advocacy, the second class in the Association for Software Testing’s Black Box Software Testing series. I’ll confess that I’ve held back for a while on this for a variety of reasons.
The first was the fact that I had gotten involved in teaching the BBST Foundations series of classes, having taken the class as a participant in May and then assisting in teaching the class three times after that (and hopefully with more opportunities to teach going forward). The second reason is that, after seeing the rate of completion of Foundations vs. the rate of completion for Bug Advocacy, I was a little concerned that I’d wash out of it or not be able to dedicate the time necessary (especially after it became clear I was going to be changing jobs!).
However, I decided that neither of those was really a good excuse, and that if I was going to fail, I might as well find out. Much easier to de-fang the dragon by taking it head on, and if I can’t de-fang it, well, let’s find out!
So, having now been in the class for three days, my impression is that this is no slouch of a class. There are some meaty concepts here. Even after 17 years of active participation in testing, it’s interesting to note that there is still plenty of ambiguity regarding what a bug is and the best way to deal with it. The exercises and readings… are no joke. When I said that Foundations was the kind of meat that was missing from a number of the training options associated with testing, I meant it, and I can honestly say I’m now even more of a fan of the model that AST is using (and note, I didn’t get any kickback for this; I paid $200 to participate, and so far I’m really glad that I have :) ).
The quizzes are still challenging, thought-provoking and a bit frustrating, and the exercises are interesting and engaging. The participants are a great bunch, many of whom I was either in class with as a participant or worked with as one of their instructors (hope I wasn’t too hard on any of them, otherwise it might be “payback time” ).
So, half a week into a new course, I’m cautiously optimistic. Either way, I’m having fun thinking about these things, and I’ll be making regular posts that talk about the class itself without giving away any “secrets” ;).
Friday, February 11, 2011
TWiST #32 - with Brian Noggle (aka QAHatesYou)
When I heard the first minute of this particular podcast, and heard the first words from the interviewee, I thought to myself, "OK, hang on a minute… is this who I think it is?!!"
For those reading the title: yes, this is an interview with test consultant Brian Noggle… OK, so what? We do interviews with test consultants all the time; why would this one be special? Well, when I tell you that Brian Noggle is much better known, and in my case somewhat beloved, by his alter ego, this might make a little more sense. Brian Noggle is "The Director", but he is even better known for the internet persona he presents, one that I love to follow, frequently retweet, and always look forward to seeing tweets and blog updates from.
QAHatesYou
Yes, Brian Noggle is the legendary QAHatesYou, and I realized quickly that this would be no ordinary interview, and Matt and Brian do not disappoint.
If you do not follow @QAHatesYou, follow him.
If you want to read both a funny and an insightful blog/website, add http://qahatesyou.com to your blog feed.
More to the point, if you want to hear a great interview with a bit of snark (but well placed snark), then listen to Episode 32.
Standard disclaimer:
Each TWiST podcast is free for 30 days, but you have to be a basic member to access it. After 30 days, you have to have a Pro membership to access it, so either head on over quickly (depending on when you see this) or consider upgrading to a Pro membership so that you can get to the podcasts and the entire library whenever you want to :). In addition, Pro membership allows you to access and download the entire archive of Software Test and Quality Assurance Magazine, including its issues under its former name, Software Test and Performance.
TWiST-Plus is all extra material, and as such is not hosted behind STP’s site model. There are no limitations on accessing TWiST-Plus material; just click the link to download and listen.
Again, my thanks to STP for hosting the podcasts and storing the archive. We hope you enjoy listening to them as much as we enjoy making them :).
Wednesday, February 9, 2011
Wednesday Book Review: How We Test Software at Microsoft (Afterword)
After so much time focusing on just this title, I figured a summary review was finally in order, so here it is, my overall review of How We Test Software at Microsoft.
This has been a long and sometimes challenging journey. When I first set out to do this project, I had a specific goal in mind: I figured that I would apply this to my company and see what elements would be useful. Little did I know, at the time I agreed to do this, that by the time I finished it I would be operating in a totally different sphere, at a totally different company, in a totally different development environment.
First, a refresher on how this project came about. I was at the Pacific Northwest Software Quality Conference in October of 2010, and I met Alan Page as he was giving a talk on performing code reviews from the perspective of being a tester. At the end of the talk he was giving this book away to those who asked questions and participated. As he had the last copy he was giving out, I impetuously said “Hey Alan, if you give me a copy I promise to write a review about it!” He smiled and said “OK, you’re on” and with that, my copy of HWTSAM fell into my hands.
As I thought about how I wanted to do this, I remembered a really interesting approach used by Trent Hamm over at The Simple Dollar, a personal finance blog I’ve followed for several years. Trent is a voracious reader, and he would often do as many as two financial book reviews a week! Now, to be fair and to provide some context and contrast, Trent is a professional blogger; The Simple Dollar is his primary job and source of livelihood, so he has the time to focus on that level of reading. TESTHEAD, while important to me and a tool I regularly use to help sharpen my craft as a tester, is not an income generator, nor is it meant to be. Besides, it’s hard to work full time, raise a family, and still give a blog the attention it deserves. Two book reviews a week would be out of the question, and this book would be too dense to cover in a one-week review with a general list-out. Then I saw Trent’s Book Club entries and I said, “A-ha! Now that’s an approach I can work with.” A chapter every few days shouldn’t be a problem!
Shortly after I started reviewing this book, my reality took a dramatic turn. I was offered the chance to start a test group at a startup company, and I took it. Those who follow this blog know that's how I transitioned from Tracker Corp. to Sidereel and, in the process, decided to move on from a company that I’d been with for six years. Needless to say, the challenge of managing that transition caused the updates to slow from twice a week to once a week, and then for a stretch I was only able to do two chapters over a three-week period. I didn’t want to drag this out for too long, so I put in a huge push to get the last few chapters ready to publish within a couple of days.
So what did I learn from this process? Was this approach a good one? From a Book Club standpoint, I think it went terribly, as I didn’t receive a single comment for any of the chapters. However, looking back over the stats of the site, many of them were in the top twenty posts on the site while I was publishing the chapter commentaries, so I know they were being read :).
From a learning and pondering standpoint, I thought the process was great. I had the chance to spend a lot of time mulling over the ideas in the book, and rather than offer up another review or summary of each chapter here, I'll direct the reader to the individual chapter synopses I've already provided. I liked some chapters more than others, and I’ll confess Ken’s chapter on Services was really dense and took me quite a bit of time to work through. Overall, I thought the book was very helpful and interesting. Some people are critical of the book for being too specific to Microsoft, but I found the approach to be interesting in an anthropological sense. I felt like the curtain was being pulled back and I could see what was actually happening inside of Microsoft. More to the point, I could see where many of the tactics and approaches were similar to what I would do, and I could see processes and steps that were totally foreign to my approach and way of thinking. Was I happy that I had a chance to learn about these approaches and this philosophy? Yes! Will I use every one of the methods described? Heck, no! Will I use even half of what is described in this book? Likely not, but even if I only walk away with half a dozen new ideas to consider, that’s a great percentage.
To be clear, I do not work for Microsoft, nor have I ever. I’ve worked with a Microsoft partner (Connectix) that was ultimately acquired by and became part of Microsoft, so I can say that I know several people who now work for Microsoft, but I don’t have any obligation to provide a glowing review. In some parts the book was excellent, and in some it was slow going and of borderline practicality for what I do. Still, even with those criticisms out of the way, there’s a lot of solid information in this book. From understanding the role of test in the organization, to creating test cases, to understanding the different product lines and methodologies used to test, to setting up and utilizing virtual machines, there’s a lot of ground covered in this book. While some would criticize the lack of specific details, that’s not really the point of the book. The authors expect a fair amount of testing knowledge coming in; i.e., this is not a book for beginners. Having said that, there’s still plenty of information; even a beginner could feel productive and find some ideas to help “bump up their game”.
This is an experiential book, and it’s based on the real-world lessons of three Microsoft veterans. It’s heavy on Microsoft-specific jargon, but as the book progresses, that jargon becomes less and less of an issue, and by the end of the book even obscure acronyms don’t seem out of place, because you now have enough context to guess what they might stand for, and eight times out of ten you’d be right. If there is going to be a criticism, it will of course have to be that the Microsoft-centric focus will be a less than optimal transplant to another environment, especially one that is not running Microsoft technologies such as .NET.
There is a fair amount of bureaucracy in these pages, but that is to be expected for a company with so many testers; a common language and methodology was chosen and agreed to so as to help facilitate communication. While Microsoft will never be confused with a 20-person agile shop, it certainly has groups that are nimble and that approach development with more in common with a small agile shop than with the behemoth standards of yesteryear.
So having now spent so much time on this project, and others associated with my job transition, I can give some ideas as to what I would do differently were I to approach this BOOK CLUB method again.
1. Doing a book club review process and setting up a Practicum process for another book, and running them at the same time, proved to be a lot more than I could easily chew and digest, especially when I was trying to post more than one chapter a week. With hindsight, I would run these reviews at two different times. I don’t regret the approach for either; I just wish I’d done them separately so I could devote my whole attention to each one.
2. Each chapter took about three hours to read, ponder, and write up. Sometimes that was relatively easy, and at other times those three hours were just impossible to find! Were I to do it again, I would commit to a single chapter each week and write the review in parallel with each section and sub-section. That would stretch a sixteen-chapter book out to nearly four months, but it would allow the reader to really spend some time with each chapter and cover it extensively. For the record, I completed all 16 chapters in 11 weeks, and that was with posting the last three chapters three days apart from each other. A more even spacing would probably have felt more natural and would have given me more time to absorb the material.
So this officially draws to a close the emphasis I’ve placed on How We Test Software at Microsoft. My thanks to Alan for giving me the book to review. I don’t think that this was quite what you had in mind when I said I’d review it, but I hope it was worth the time and the wait. Happy Testing!
Tuesday, February 8, 2011
BOOK CLUB: How We Test Software at Microsoft (16/16)
This is the second part of Section 4 of How We Test Software at Microsoft, and it is also the final chapter of the book. After three months of near-weekly updates (some more often, some less often… sorry about that, this approach was a learning process for me, too :) ), this project has now come to an end. I will post a follow-on to this final entry with a more conventional “total” review of the book and some comments on this BOOK CLUB process (will I do this again? What did I learn from doing this? What went well, and what would I want to do differently in the future?), but first, let’s close out this endeavor with some thoughts from Alan regarding where testing may be heading and how Microsoft is trying to shape that future, both within its own company culture and by helping to influence the broader culture outside of itself.
Chapter 16: Building the Future
Alan starts out this final chapter with the reminder that, by direct comparison, software testing is a newer player in the culture than software development. Computer services offered commercially to the public began in earnest in the 1950s. In those days, software development and software testing were the same discipline; the developer did both. As systems grew more complex and more lines of code were being written, and fostered also by developments in the manufacturing world, the quality of the process became more of a focus, along with the idea that a separate, non-partisan entity should be part of the process to review and inspect the systems. Thus, the role of finding bugs and doing “desk checks” of programs broke into two disciplines, where the software developer wrote the code and a tester checked it to make sure it was free of defects (or, barring that, found what defects they could find).
Today, software testing is still primarily a process of going through software and verifying that it does what it claims to do, while keeping our eyes out for the issues that would be embarrassing or downright dangerous to a company’s future livelihood if a customer were to run across them. The programs written today are bigger, more complex, and have more dependencies than ever. Think of the current IDE culture; so many tools are available at developers’ fingertips that they can produce working code while seemingly writing very little of it themselves. Full-featured utilities created with just twenty lines of code. Of course, those of us in testing know full well that that’s not the real story; those 20 lines of code contain references to references to objects and classes that we have to be very alert to if we want to ensure that we have done thorough testing.
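To illustrate what I mean (this is my own toy example, not anything from the book), here's a "small" Java utility. Only a handful of visible lines, yet nearly every one of them leans on library classes whose behavior, error handling, and edge cases are exactly the kind of hidden surface a thorough tester has to account for.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// A "twenty line" utility: fetch a URL and print the first line of the response.
// The visible code is tiny, but the testing surface includes URL parsing,
// DNS resolution, HTTP status handling, character encodings, timeouts,
// and stream cleanup, none of which appear explicitly in these lines.
public class FirstLineFetcher {
    public static void main(String[] args) throws Exception {
        URL url = new URL(args.length > 0 ? args[0] : "http://www.example.com/");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(5000);   // what happens at 4999 ms? at 5001?
        conn.setReadTimeout(5000);
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            System.out.println(reader.readLine());  // null if the body is empty
        } finally {
            conn.disconnect();
        }
    }
}
```

The line count says "trivial"; the list of things that can go wrong says otherwise, and that gap is where testers live.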
As far as we can tell, the future is going to be more wired and more connected, with ever more blurring of the digital lines structuring our lives. The days of a discrete computer are ancient history. Nearly every digital device we come into contact with today has ways and means to synchronize with other devices, either through cabled connections or through the ether. The need to test has never been greater, and the need for good testing is growing all the time. The question of this final chapter is simple, but by no means easy… where do we go from here?
The Need for Forward Thinking
In the beginning there was debugging; then we moved to verification and analysis. Going forward, the question is not going to be so much “how do we verify that the system is working?” but rather “how do we prevent errors in the first place?” A common metaphor that I use when I talk about areas where we have a stake is two concentric circles. The inner one I call the sphere of control; the outer one I call the sphere of influence. Verification and analysis are very much in the sphere of control for a tester; they are something we can do directly and that provides immediate value. When it comes to prevention, there are some things we can do directly, but much falls outside of our direct control; it lands instead in the sphere of influence. Alan recognizes this, and makes the point that the biggest gains going forward in developing better quality will not take place in the verification and analysis sphere, but in the preventative sphere. The rub is that what we as testers can do to prevent bugs is a bit more limited. What we can do is provide great information that will help to influence the behaviors and practices of those who develop code, so that the preventative gains can be realized.
Thinking Forward by Moving Backward
I like this story, so it’s going in unedited :):
As the story goes, one day a villager was walking by the river that ran next to his village and saw a man drowning in the river. He swam into the river and brought the man to safety. Before he could get his breath, he saw another man drowning, so he yelled for help and went back into the river for another rescue mission. More and more drowning men appeared in the river, and more and more villagers were called upon to come help in the rescue efforts. In the midst of the chaos, one man began walking away along a trail up the river. One of the villagers called to him and asked, “Where are you going? We need your help.” He said, “I’m going to find out who is throwing all of these people into the river.”
Another phrase I like a lot comes from Stephen R. Covey’s book “The Seven Habits of Highly Effective People”. His habit #7 is called “Sharpening the Saw”. To picture why this is relevant here, he uses the example of a man trying to cut through a big log: he’s huffing and puffing, and he’s making progress, but it’s slow going. An observer notes that he’s doing a lot of work, and then helpfully asks, “Have you considered sharpening your saw?”, to which the man replies, “Hey, I’m busy sawing here!” The point is, we get so focused on what we are doing right now that we neglect to see what we could do: stop the process, repair or remedy the situation, and then go forward with renewed vigor and sharper tools.
How many software projects rely on end-of-the-road testing to find the problems that, if we believe the constant drum beat from executives and others who champion quality, would be much more easily found earlier in the process? Is it because we are so busy sawing that we never stop to sharpen the saw? Are we so busy saving drowning people that we don’t bother to go up river and see why they are falling in?
All of us who are testers recall the oft-mentioned figures on how the cost of a bug increases the later it is found in the process.
A bug introduced in the requirements phase that might cost $100 to fix if found immediately will cost 10 times as much to fix if not discovered until the system test phase, or as much as 100 times as much if detected post-release. Bugs fixed close to when they are introduced are generally easier to fix. As bugs age in the system, the cost can increase as the developers have to reacquaint themselves with the code to fix the bug, or as dependencies in the code surrounding the bug introduce additional complexity and risk to the fix.
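Just to make those multipliers concrete, here's a quick back-of-the-envelope sketch. The $100 base figure and the 10x/100x factors are the commonly quoted illustrative numbers from the paragraph above, not measured data from the book or from any real project.

```java
// Back-of-the-envelope sketch of the cost escalation figures quoted above.
// The base cost and multipliers are illustrative assumptions, not measurements.
public class BugCostSketch {
    public static void main(String[] args) {
        double requirementsCost = 100.0;                 // bug fixed where it was introduced
        double systemTestCost = requirementsCost * 10;   // same bug found at system test
        double postReleaseCost = requirementsCost * 100; // same bug found by a customer

        System.out.printf("Fixed in requirements phase: $%.0f%n", requirementsCost);
        System.out.printf("Fixed at system test:        $%.0f%n", systemTestCost);
        System.out.printf("Fixed post-release:          $%.0f%n", postReleaseCost);
        // Ten requirements bugs that all slip through to customers:
        System.out.printf("Ten escaped bugs:            $%.0f%n", 10 * postReleaseCost);
    }
}
```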
Striving for a Quality Culture
Alan points to the work of Joseph Juran and the fact that it is the culture of a place that determines its approach to quality issues; resistance to change, or the lack thereof, likewise has a cultural element to it. When I refer to culture here (and Alan does too), we are referring to the corporate culture, the management culture, the visions and values of a company. Those vary quite a bit from company to company, but company mores and viewpoints can hold for a long time and become ingrained in the collective psyches of organizations. The danger is that, if the culture is not one that embraces quality as a first-order factor of doing business, quality will take a back seat to other initiatives and priorities until it absolutely must be dealt with (in some organizations, that lack of dealing with it ultimately results in the closure of said company).
For many, the idea of a front-end quality investment sounds like a wonderful dream, and for many of us, that’s what it has proven to be… just a dream. How can we help make the step to earlier in the process? It needs to be a culture that everyone in the organization embraces, one where prevention trumps expediency (or we could go with a phrase that Matt Heusser used on the TWiST podcast that I’ve grown to love: “If heroics are required, I can be a hero, but it’s going to cost you!”). Seriously, I love this phrase, and I’ve actually used it a few times myself… because it’s 100% true. If an organization waits until the end of the process for last-minute heroics, it will cost the organization, either in crunch-time overtime of epic proportions, or in reactive fixes because something made its way out into the wild that shouldn’t have and, with some preventative steps, very likely could have been caught earlier in the life cycle.
Testing and Quality Assurance
“In the beginning of a malady it is easy to cure but difficult to detect, but in the course of time, not having been either detected or treated in the beginning, it becomes easy to detect, but difficult to cure.” –Niccolo Machiavelli
Alan, I just have to say “bless you” for bringing this up over and over in the book, and for making sure it is part of the “parting shot” and summation. Early detection of a problem always trumps last-minute heroics, especially when it comes to testing. Testing is the process of unearthing problems before a customer can find them. No question, Microsoft has a lot of testers; I know quite a few of them and have worked with several of them personally over the years (as I said in the previous chapter, I worked at Connectix in 2001 and 2002, and a number of the software engineers and testers from that team are active SDEs and SDETs at Microsoft today). It’s not that they are not good at testing; it’s that even Microsoft still focuses on the wrong part of the equation…:
“YOU CAN’T TEST QUALITY INTO A PRODUCT!”
Testing and Quality Assurance are often treated as though they are the same thing. They are not. They are two different disciplines. When we test a product, it’s an after-the-fact situation: the product is made, and we want to see if it will withstand the rigor of being run through its paces. Quality Assurance, by contrast, is a process meant to be proactive and early in the life of a process or a product, to make sure the process delivers the intended result. It sounds like semantics, but it’s not; they are two very different processes with two different approaches. Of course, to assure quality, we use testing to make sure that the quality initiatives are being met, but using the terms interchangeably is both inaccurate and misleading (as well as confusing).
Who Owns Quality?
This is not a trick question, but the answers often vary. Does the test team own quality? No. They own the testing process. They own the “news feed” about the product. Others would say that the entire team owns quality, but do they really? If everyone owns something, does anyone really own anything?! Alan makes the point that saying the test team owns quality puts the emphasis in the wrong place, and saying everyone owns quality de-emphasizes it entirely. The fact is, the management team owns quality, because they are the ones who make the ship decisions. Testing doesn’t have that power. The mental image of the “Guardian of the Gate” is a bad one for testing, as it makes it seem as though we are the ones who decide who shall pass and who shall not, and we don’t. I’m a little more comfortable with the idea of the “last tackle on the field”, because the test team is often the last group to see a feature before it goes out into the wild, but even then, there’s no guarantee we will catch an issue, or, if we do catch it, that we can prevent it from going out into the field. Management owns that. The best metaphor, to me, is that of a beat reporter. We get the story, we tell the story, as much of it as we know and as much of it as we can learn. We tell our story, and then we leave it to the management team to decide if we have a shipping product or not.
In short, a culture of quality and a commitment to it must exist first before major changes and focus on quality will meet with success.
The Cost of Quality
The Cost of Quality is not the price of making a high-quality product. It’s the price paid by a company when a poor-quality product gets out. Everything from extra engineering cycles to provide a fix, to lost opportunity because of bad press, to actual loss of revenue because a service isn’t working, works into the cost of quality. Other examples of the price to pay when quality issues escape into the wild are:
- Rewriting or redesigning a component or feature
- Retesting as a result of test failure or code regression
- Rebuilding a tool used as part of the engineering process
- Reworking a service or process, such as a check-in system, build system, or review policy
The point being made is that, had none of these situations happened, because testing and quality assurance had actually been perfected to the point where no bugs slipped through (to dream… the impossible dream…), these expenses would not have caused the bottom line to take a hit. So perhaps the real cost of quality is what Alan calls the Cost of Poor Quality (COPQ).
Philip Crosby says each business has three specific cost areas:
- Appraisal (salaries, equipment, software, etc.)
- Preventative (expenditures associated with implementing and maintaining preventative techniques)
- Failure (the cost of rework or “do-over”)
To put it bluntly, preventative work gets a lot of lip service, but preventative measures rarely get implemented.
Failure costs? We pay them in spades, usually way more often than the other types (overtime, crunch time, the death march to release, etc.).
The takeaway for many testers (and believe me, if we could impart no other message, this would be at the very top of my list of takeaways) is this:
We don’t need heroics; we need to prevent the need for them.
A New Role for Test
One of the great ironies is that, when testers talk about the desire to move away from the focus on late-in-the-game testing toward earlier-in-the-process prevention of bugs, an oft-heard comment is, “Come on, if we do that, what will the testers test?” Well, let’s see… there’s the potential for looking at the human factors that influence how a product is actively used, there’s performance and tuning of systems, there’s system uptime and reliability, there’s researching and examining different testing techniques to get deeper into the application… in short, there are lots of things that testers can do, even if the end-of-cycle heroic suicide missions are done away with entirely (many of us can only dream and wish for such a world). Many of the more interesting and compelling areas of software testing do not get explored in many companies because testers are in perpetual firefighting mode. For most of us, were we given the opportunity to get out of that situation and explore more options, we would welcome it gladly!
Test Leadership
At the time HWTSAM was written, there were over 9,000 testers at Microsoft. Seriously, wrap your head around that if you can. How do you develop a discipline that large at a company the size of Microsoft, so that the craft keeps moving forward? You encourage leadership and provide a platform for that leadership to develop and flourish.
The Microsoft Test Leadership Team
Microsoft developed the Microsoft Test Leadership Team (MSTLT) to encourage the sharing of good practices and testing knowledge among the various testing groups and testers across the company.
The MSTLT’s mission is as follows:
The Microsoft Test Leadership Team vision
The mission of the Microsoft Test Leadership Team (MSTLT) is to create a cross–business group forum to support elevating and resolving common challenges and issues in the test discipline.
The MSTLT will drive education and best practice adoption back to the business group test teams that solve common challenges.
Where appropriate the MSTLT will distinguish and bless business group differences that require local best practice optimization or deviation.
The MSTLT has around 25 members, including the most senior test managers, directors, general managers, and VPs; they are spread throughout the company and represent all the products Microsoft makes. Membership is based on level of seniority and the approval of the TLT chair and product line vice president. Having these members involved helps to make sure that testing advocacy grows and that the state of the craft develops and flourishes with the support of the very people who champion that growth and development.
Test Leadership in Action
The MSTLT group meets every month to discuss and develop plans to help grow the career paths of a number of contributors, as well as addressing new trends and opportunities that can help testers become better and (yet again) improve the state of the craft overall within Microsoft.
Some examples of topics covered by the MSTLT:
Updates on yearly initiatives: At least one MSTLT member is responsible for every MSTLT initiative and for presenting to the group on its progress at least four times throughout the year.
Reports from human resources: The MSTLT has a strong relationship with the corporate human resources department. This meeting provides an opportunity for HR to disseminate information to test leadership as well as take representative feedback from the MSTLT membership.
Other topics for leadership review: Changes in engineering mandates or in other corporate policies that affect engineering are presented to the leadership team before circulation to the full test population. With this background information available, MSTLT members can distribute the information to their respective divisions with accurate facts and proper context.
The Test Architect Group
Another group that has developed is the Test Architect Group, which, contrary to its name, does not just include Test Architects (though it started out that way) but also includes senior testers and individuals who are working in the role of a test architect, whether they have the official title or not.
So what was envisioned for being a Test Architect? Well, here’s how it was originally considered and implemented:
The primary goals for creating the Test Architect position are:
Some of the key things that Test Architects would focus on include:
The profile of a Test Architect:
Test Architects will be nominated by VPs and would remain in their current teams. They will be focused on solving key problems and issues facing the test teams across the board. The Test Architects will form a virtual team and meet regularly to collaborate with each other and other Microsoft groups including Research. Each Test Architect will be responsible for representing unique problems faced by their teams and own implementing and driving key initiatives within their organizations in addition to working on cross-group issues.
Test Excellence
Microsoft created the Engineering Excellence (EE) team in 2003. The group was created to help push ahead initiatives for tester training and to discover and share good practices in engineering across the company (some of you may notice that I didn’t say “best practices”. While Alan used the term “Best Practices”, I personally don’t think there is such a thing. There are some really great practices, but to say “best” implies there’s no room for better practices to develop. It’s a pet peeve of mine, so I’m modifying the words a bit, but the sentiment and the idea are the same).
The mission of the Test Excellence team comes down to Sharing, Helping, and Communicating.
Sharing
Sharing means focusing on the following areas:
Helping
One of the primary purposes of the test excellence team is to help champion quality improvements and learning for all testers. They help accomplish these objectives in the following ways:
Communicating
Having these initiatives is great, and supporting them takes a lot of energy and commitment, but without communicating to the rest of the organization, these initiatives would have limited impact. Some of the ways that the Test Excellence team helps foster communication among other groups are:
Keeping an Eye on the Future
Trying to anticipate the future of testing is a daunting task, but many trends make themselves visible often years in advance, and by trying to anticipate these needs and opportunities, the Test Excellence team can be positioned to help testers grow into and help develop these emerging skills and future opportunities.
Microsoft Director of Test Excellence
Each of the authors of HWTSAM has held (or, in the case of Alan Page, currently holds) the position of Director of Test Excellence.
The role’s primary responsibility is to develop the opportunities, infrastructure, and practices needed to help advance the testing profession at Microsoft.
The following people have all held the Director of Test position:
The Leadership Triad
The Microsoft Test Leadership Team, Test Architect Group, and Test Excellence are three pillars of emphasis and focus on the development and advancement of the software testing discipline within Microsoft.
Innovating for the Future
The final page of the book deals with a goal for the future. Since so many of Alan, Ken and BJ’s words are already included, I think it’s only fair to let them have the last word :)...
When I think of software in the future, or when I see software depicted in a science fiction movie, two things always jump out at me. The first is that software will be everywhere. As prevalent as software is today, in the future, software will interact with nearly every aspect of our lives. The second thing that I see is that software just works. I can’t think of a single time when I watched a detective or scientist in the future use software to help them solve a case or a problem and the system didn’t work perfectly for them, and I most certainly have never seen the software they were using crash. That is my vision of software—software everywhere that just works.
Getting there, as you’ve realized by reading this far in the book, is a difficult process, and it’s more than we testers can do on our own. If we’re going to achieve this vision, we, as a software engineering industry, need to continue to challenge ourselves and innovate in the processes and tools we use to make software. It’s a challenge that I embrace and look forward to, and I hope all readers of this book will join me. If you have questions or comments for the authors of this book (or would like to report bugs) or would like to keep track of our continuing thoughts on any of the subjects in this book, please visit http://www.hwtsam.com. We would all love to hear what you have to say.
—Alan, Ken, and Bj
Chapter 16: Building the Future
Alan starts out this final chapter with the reminder that, by direct comparison, software testing is a new player in the culture compared to software development. Computer services offered to the public commercially began proper in the 1950s. In those days, software development and software testing were the same discipline; the developer did both. As the systems grew more complex and more lines of code were being written, and also fostered by developments in the manufacturing world, quality of the process became more of a focus and the idea that a separate, non-partisan entioty should be part of the process to review and inspect the systems. Thus, the role of finding bugs and doing “desk checks” of programs specifically as a development practice broke into two disciplines, where the software developer wrote the code and a tester checked it and make sure it was free of defects (or barring that, found what defects they could find).
Today, software testing is still primarily a process of going through software and verifying that it does what it claims to do, and keeping our eyes out for the issues that would be embarrassing or downright dangerous to a company’s future livelihood if a customer were to run across it. The programs written today are bigger, more complex and have more dependencies than ever. Think of the current IDE culture; so many tools are available at developers’ fingertips that they are able to write code without writing much of anything, it seems. Full featured utilities created with just twenty lines of code. Of course, those of us in testing know full well that that’s not the real story; those 20 lines of code contain references to references to objects and classes that we have to be very alert to if we want to ensure that we have done thorough testing.
As far as we can tell, the future is looking to get more wired, more connected, more blurring of the digital lines structuring our lives. The days of a discrete computer are ancient history. Nearly every digital device we come into contact with today now has ways and means to synchronize with other devices, either through cabled connections or through the ether. The need to test has never been greater, and the need for good testing is growing all the time. The question of this final chapter is simple, but by no means easy… where do we go from here?
The Need for Forward Thinking
In the beginning there was debugging, then we moved to verification and analysis. Going forward, the questions are not going to be so much “how do we verify that the system is working but rather, how do we prevent errors in the first place. A common metaphor that I use when I talk about areas where we have a stake in are two concentric circles. The inner one I call the sphere of control, the outer one I call the sphere of influence. When it comes to verification and analysis, that’s very much in the sphere of control for a tester, it’s something we can do directly and providce immediate value. When it comes to prevention, there are some things we can do to control it, but so much falls outside of our direct control, but it definitely falls into the sphere of influence. Alan recognizes this, and makes the point that the biggest gains going forward towards developing better quality will not be taking place in the verification and analysis sphere, but in the preventative sphere. The rub is, what we as testers can do to prevent bugs is a bit more limited. What we can do is provide great information that will help to influence the behaviors and practices of those who develop code, so that the preventative gains can be realized.
Thinking Forward by Moving Backward
I like this story, so it’s going in unedited :):
As the story goes, one day a villager was walking by the river that ran next to his village and saw a man drowning in the river. He swam into the river and brought the man to safety. Before he could get his breath, he saw another man drowning, so he yelled for help and went back into the river for another rescue mission. More and more drowning men appeared in the river, and more and more villagers were called upon to come help in the rescue efforts. In the midst of the chaos, one man began walking away along a trail up the river. One of the villagers called to him and asked, “Where are you going? We need your help.” He said, “I’m going to find out who is throwing all of these people into the river.”
Another phrase I like a lot comes from Stephen R. Covey’s book “The Seven Habits of Highly Effective People”. His habit #7 is called “Sharpening the Saw”. To picture why this would be relevant here, he uses the example of a guy trying to cut through a big log and he’s huffing and puffing, and he’s making progress, but it’s slow going. An observer notes that he’s doing a lot of work, and then helpfully asks “have you considered sharpening your saw?”, To which the man replies “Hey, I’m busy sawing here!” The point is, we get so focused on what we are doing right now, that we neglect to see what we can do, stop the process, repair or remedy the situation, and then go forward with renewed vigor and sharper tools.
How many software projects rely on the end of the road testing to find the problems that, if we believe the constant drum beat from executives and others who champion quality, would be way more easily found earlier in the process? Is it because we are so busy sawing that we never stop to sharpen the saw? Are we so busy saving drowning people we don’t bother to go up river and see why they are falling in?
All of us who are testers recall the oft mentioned figures of the increase of cost for each bug found later on in the process.
A bug introduced in the requirements phase that might cost $100 dollars to fix if found immediately will cost 10 times as much to fix if not discovered until the system test phase, or as much as 100 times as much if detected post-release. Bugs fixed close to when they are introduced are generally easier to fix. As bugs age in the system, the cost can increase as the developers have to reacquaint themselves with the code to fix the bug or as dependencies to the code in the area surrounding the bug introduce additional complexity and risk to the fix.
Striving for a Quality Culture
Alan points to the work of Joseph Juran and the fact that it is the culture of a place that will determine their approach to quality issues, and their resistance or lack thereof will likewise also have a cultural element to it as well. When I refer to culture here (and Alan, too) we are referring to the corporate culture, the management culture, the visions and values of a company. Those are very fluid as you go from company to company, but company mores and viewpoints can hold for a long time and become ingrained in the collective psyches of organizations. The danger is that, if the culture is not one that embraces quality as a first order factor of doing business, quality will take a back seat to other initiatives and priorities until it absolutely must be deal with (in some organizations, their lack of dealing often results in the closure of said company).
For many, the idea of a front-end quality investment sounds like a wonderful dream, but for many of us, that’s what it has proven to be… just a dream. How can we help make the step to earlier in the process? It needs to be a culture everyone in the organization embrace, one where prevention trumps expediency (or we could go with a phrase that Matt Heusser used on the TWiST podcast that I’ve grown to love… “If heroics are required, I can be a hero, but it’s going to cost you!” Seriously, I love this phrase, and I’ve actually used it a few times myself… because it’s 100% true. If an organization waits untiol the end of the process for last minute heroics, it will cost the organization, either in crunch time overtime of epic proportions, or in reactive fixes because something made itself out into the wild that shouldn’t have and, with some preventative steps, very likely could have been caught earlier in the life cycle.
Testing and Quality Assurance
“In the beginning of a malady it is easy to cure but difficult to detect, but
in the course of time, not having been either detected or treated in the beginning, it becomes
easy to detect, but difficult to cure.” –Niccolo Machiavelli
Alan, I just have to say “bless you” for bringing this up over and over in the book, and making sure it is part of the “parting shot” and summation. Early detection of a problem always trumps last minute heroics, especially when it comes to testing. Testing is the process of unearthing/uprooting problems before a customer can find them. No question, Microsoft has a lot of testers, and as I know quite a few of them and have worked with several of them personally over the years (as I said in the previous chapter, I worked at Connectix in 2001 and 2002, and a number of the software engineers and testers from that team are active SDE’s and SDET’s for Microsoft today). It’s not that they are not good at testing, it’s that even Microsoft still focuses on the wrong part of the equation…:
“YOU CAN’T TEST QUALITY INTO A PRODUCT!”
Testing and Quality Assurance are often treated as though they are the same thing. They are not. They are two different disciplines. When we test a product, it’s an after the fact situation. The product is made, we want to see if it will withstand the rigor of being run through its paces. Quality Assurance, by contrast is a process meant to be proactive and early in the life of a process or a product, to make sure the process delivers the intended result. It sounds like semantics, but it’s not, they are two very different processes with two different approaches. Of course, to assure quality, we use testing to make sure that the quality initiatives are being met, but using the terms interchangeably is both inaccurate and misleading (as well as confusing).
Who Owns Quality?
This is not a trick question, but the answers often vary. Does the test team own quality? No. They own the testing process. They own the “news feed” about the product. Others would say that the entire team owns quality, but do they really? If everyone owns something, does anyone really own anything?! Alan makes the point that saying the test team owns quality is putting the emphasis in the wrong place, and saying everyone owns quality is to de-emphasize it entirely. The fact is, the management team are the ones who own quality, because they are the one’s that make the ship decisions. Testing doesn’t have that power. The mental image of the “Guardian of the Gate” for testing is a bad one, as it makes it seem as though we are the ones that make the decision as to who shall pass and who will not, and we don’t. I’m a little more comfortable with the idea of the “last tackle on the field” because often the test team is the last group to see a feature before it goes out into the wild, but even then, there’s no guarantee we will catch it, or if we do stop it, that we can prevent them from going out into the field. Management owns that. The best metaphor, to me, is the idea of being a beat reporter. We get the story, we tell the story, as much as it that we know, and as much of it as we can learn. We tell our story, and then we leave it to the management team to decide if we have a shipping product or not.
In short, a culture of quality and a commitment to it must exist first before major changes and focus on quality will meet with success.
The Cost of Quality
The Cost of Quality is not the price of making a high quality product. It’s the price paid by a company when a poor quality product gets out. Everything from extra engineering cycles to provide a fix to lost opportunity because of bad press, to actual loss of revenue because a service isn’t working, all of these work into the cost of quality. Other examples of the price to pay when quality issues escape into the wild are:
- Rewriting or redesigning a component or feature
- Retesting as a result of test failure or code regression
- Rebuilding a tool used as part of the engineering process
- Reworking a service or process, such as a check-in system, build system, or review policy
The point being made is that, had none of these situations happened, because testing and quality assurance were actually perfected to the point where no bugs slipped through (to dream… the impossible dream…), these expenses would not have caused the bottom line to take a hit. So perhaps the real cost of quality is what Alan calls the Cost of Poor Quality (COPQ).
Philip Crosby says each business has three specific cost areas:
- Appraisal (salaries, equipment, software, etc.)
- Preventative (expenditures associated with implementing and maintaining preventative techniques)
- Failure (the cost of rework or “do-over”)
To put it bluntly, preventative measures get a lot of lip service, but rarely do they actually get implemented.
Failure costs? We pay them in spades, usually way more often than the other types (overtime, crunch time, the death march to release, etc.).
The takeaway for many testers (believe me, if we could impart no other message, this would be at the very top of my list of takeaways):
We don’t need heroics; we need to prevent the need for them.
A New Role for Test
One of the great ironies is that, when testers talk about the desire to move away from a focus on late-in-the-game testing toward earlier-in-the-process prevention of bugs, an oft-heard comment is, “Come on, if we do that, what will the testers test?” Well, let’s see… there’s the potential for looking at the human factors that influence how a product is actually used, there’s performance and tuning of systems, there’s system uptime and reliability, there’s researching and examining different testing techniques to get deeper into the application… in short, there are lots of things that testers can do, even if the end-of-cycle heroic suicide missions are done away with entirely (many of us can only dream and wish for such a world). Many of the more interesting and compelling areas of software testing do not get explored in many companies because testers are in perpetual firefighting mode. For most of us, were we given the opportunity to get out of that situation and explore more options, we would welcome it gladly!
Test Leadership
At the time HWTSAM was written, there were over 9,000 testers at Microsoft. Seriously, wrap your head around that if you can. How do you develop a discipline that large at a company the size of Microsoft, so that the state of the craft keeps moving forward? You encourage leadership and provide a platform for that leadership to develop and flourish.
The Microsoft Test Leadership Team
Microsoft developed the Microsoft Test Leadership Team (MSTLT) to encourage the sharing of good practices and testing knowledge among the various testing groups and between individual testers.
The MSTLT’s mission is as follows:
The Microsoft Test Leadership Team vision
The mission of the Microsoft Test Leadership Team (MSTLT) is to create a cross–business group forum to support elevating and resolving common challenges and issues in the test discipline.
The MSTLT will drive education and best practice adoption back to the business group test teams that solve common challenges.
Where appropriate the MSTLT will distinguish and bless business group differences that require local best practice optimization or deviation.
The MSTLT has around 25 members, including the most senior test managers, directors, general managers, and VPs; they are spread throughout the company and represent all products Microsoft makes. Membership is based on level of seniority and approval of the TLT chair and product line vice president. Having these members involved helps to make sure that testing advocacy grows and that the state of the craft develops and flourishes with the support of the very people who champion that growth and development.
Test Leadership in Action
The MSTLT group meets every month to discuss and develop plans to help grow the career paths of a number of contributors, as well as addressing new trends and opportunities that can help testers become better and (yet again) improve the state of the craft overall within Microsoft.
Some examples of topics covered by the MSTLT:
Updates on yearly initiatives: At least one MSTLT member is responsible for every MSTLT initiative and for presenting to the group on its progress at least four times throughout the year.
Reports from human resources: The MSTLT has a strong relationship with the corporate human resources department. This meeting provides an opportunity for HR to disseminate information to test leadership as well as take representative feedback from the MSTLT membership.
Other topics for leadership review: Changes in engineering mandates or in other corporate policies that affect engineering are presented to the leadership team before circulation to the full test population. With this background information available, MSTLT members can distribute the information to their respective divisions with accurate facts and proper context.
The Test Architect Group
Another group that has developed is the Test Architect Group which, contrary to its name, does not just include Test Architects (though it started out that way) but also includes senior testers and individuals who are working in the role of a test architect, whether they have the official title or not.
So what was envisioned for being a Test Architect? Well, here’s how it was originally considered and implemented:
The primary goals for creating the Test Architect position are:
- To apply a critical mass of senior, individual contributors on difficult/global testing problems facing Windows development teams
- To create a technical career path for individual contributors in the test teams
Some of the key things that Test Architects would focus on include:
- Continue to evolve our development process by moving quality upstream
- Increase the throughput of our testing process through automation, smart practices, consolidation, and leadership
The profile of a Test Architect:
- Motivated to solve the most challenging problems faced by our testing teams
- Senior-level individual contributor
- Has a solid understanding of Microsoft testing practices and the product development process
- Ability to work both independently and cross-group, developing and deploying testing solutions.
Test Architects will be nominated by VPs and would remain in their current teams. They will be focused on solving key problems and issues facing the test teams across the board. The Test Architects will form a virtual team and meet regularly to collaborate with each other and other Microsoft groups including Research. Each Test Architect will be responsible for representing unique problems faced by their teams and own implementing and driving key initiatives within their organizations in addition to working on cross-group issues.
Test Excellence
Microsoft created the Engineering Excellence (EE) team in 2003. The group was created to help push ahead initiatives for tester training and to discover and share good practices in engineering across the company (some of you may notice that I didn’t say “best practices”. While Alan used the term “Best Practices”, I personally don’t think there is such a thing. There are some really great practices, but to say “best” means there’s no room for better practices to develop. It’s a pet peeve of mine, so I’m modifying the words a bit, but the sentiment and the idea are the same).
The mission of the Test Excellence team comes down to Sharing, Helping, and Communicating.
Sharing
Sharing means focusing on the following areas:
- Practices The Test Excellence team identifies practices or approaches that have potential for use across different teams or divisions at Microsoft. The goal is not to make everyone work the same way, but to identify good work that is adoptable by others.
- Tools The approach with tools is similar to practices. For the most part, the core training provided by the Test Excellence team is tool-agnostic, that is, the training focuses on techniques and methods but doesn’t promote one tool over another.
- Experiences Microsoft teams work in numerous different ways—often isolated from those whose experiences they could potentially learn from. Test Excellence attempts to gather those experiences through case studies, presentations (“Test Talks”), and interviews, and then share those experiences with disparate teams.
Helping
One of the primary purposes of the Test Excellence team is to help champion quality improvements and learning for all testers. They help accomplish these objectives in the following ways:
- Facilitation Test Excellence team members often assist in facilitating executive briefings, product line strategy meetings, and team postmortem discussions. Their strategic insight and view from a position outside the product groups are sought out and valued.
- Answers Engineers at Microsoft expect the Test Excellence team to know about testing and aren’t afraid to ask them. In many cases, team members do know the answer, but when they don’t, their connections enable them to find answers quickly. Sometimes, team members refer to themselves as test therapists and meet individually with testers to discuss questions about career growth, management challenges, or work–life balance.
- Connections Probably the biggest value of Test Excellence is connections—their interaction with the TLT, TAG, Microsoft Research, and product line leadership ensures that they can reduce the degrees of separation between any engineers at Microsoft and help them solve their problems quickly and efficiently.
Communicating
Having these initiatives is great, and supporting them takes a lot of energy and commitment, but without communicating to the rest of the organization, these initiatives would have limited impact. Some of the ways that the Test Excellence team helps foster communication among other groups are:
- A monthly test newsletter for all testers at Microsoft includes information on upcoming events, status of MSTLT initiatives, and announcements relevant to the test discipline.
- University relationships are discussed, including reviews on test and engineering curriculum as well as general communications with department chairs and professors who teach quality and testing courses in their programs.
- The Microsoft Tester Center (http://www.msdn.com/testercenter)—much like this book—intends to provide an inside view into the testing practices and approaches used by Microsoft testers. This site, launched in late 2007, is growing quickly. Microsoft employees currently create most of the content, but industry testers provide a growing portion of the overall site content and are expected to become larger contributors in the future.
Keeping an Eye on the Future
Trying to anticipate the future of testing is a daunting task, but many trends make themselves visible often years in advance, and by trying to anticipate these needs and opportunities, the Test Excellence team can be positioned to help testers grow into and help develop these emerging skills and future opportunities.
Microsoft Director of Test Excellence
Each of the authors of HWTSAM has held (or, in the case of Alan Page, currently holds) the position of Director of Test Excellence.
Its primary responsibility is to work towards developing the opportunities, infrastructure, and practices needed to help advance the testing profession at Microsoft.
The following people have all held the Director of Test position:
- Dave Moore (Director of Development and Test), 1991–1994
- Roger Sherman (Director of Test), 1994–1997
- James Tierney (Director of Test), 1997–2000
- Barry Preppernau (Director of Test), 2000–2002
- William Rollison (Director of Test), 2002–2004
- Ken Johnston (Director of Test Excellence), 2004–2006
- James Rodrigues (Director of Test Excellence), 2006–2007
- Alan Page (Director of Test Excellence), 2007–present
The Leadership Triad
The Microsoft Test Leadership Team, Test Architect Group, and Test Excellence are three pillars of emphasis and focus on the development and advancement of the software testing discipline within Microsoft.
Innovating for the Future
The final page of the book deals with a goal for the future. Since so many of Alan, Ken and BJ’s words are already included, I think it’s only fair to let them have the last word :)...
When I think of software in the future, or when I see software depicted in a science fiction movie, two things always jump out at me. The first is that software will be everywhere. As prevalent as software is today, in the future, software will interact with nearly every aspect of our lives. The second thing that I see is that software just works. I can’t think of a single time when I watched a detective or scientist in the future use software to help them solve a case or a problem and the system didn’t work perfectly for them, and I most certainly have never seen the software they were using crash. That is my vision of software—software everywhere that just works.
Getting there, as you’ve realized by reading this far in the book, is a difficult process, and it’s more than we testers can do on our own. If we’re going to achieve this vision, we, as a software engineering industry, need to continue to challenge ourselves and innovate in the processes and tools we use to make software. It’s a challenge that I embrace and look forward to, and I hope all readers of this book will join me. If you have questions or comments for the authors of this book (or would like to report bugs) or would like to keep track of our continuing thoughts on any of the subjects in this book, please visit http://www.hwtsam.com. We would all love to hear what you have to say.
—Alan, Ken, and Bj
BOOK CLUB: How We Test Software at Microsoft (15/16)
This is the first part of Section 4 in “How We Test Software at Microsoft”. We are in the home stretch now, just one more chapter to go after this one! This section deals with the idea of solving future testing problems today where possible, both in the testing technique sphere with failure analysis and code review, and in the technology sphere with virtualization. Note, as in previous chapter reviews, Red Text means that the section in question is verbatim (or almost verbatim) as to what is printed in the actual book.
There is no question that the challenges for testing are going to grow rather than shrink in size over the coming years. The big question then is “what can the testers do to help surf the waves rather than get crushed by them?” Taking advantage of the tools and infrastructure options open to them will go a long way in helping to make it possible for the testers and developers to keep abreast of the furious pace of development, and utilizing tools like virtualization, code reviews, and failure analysis will help testers and developers quickly deploy environments, gain a better understanding of the code being created, and more quickly respond to the errors and failures that are the result of continuous software development.
Chapter 15: Solving Tomorrow’s Problems Today
This section starts out with Alan making the case that, while software testing is an expanding field, and one that is getting more respect over time (especially considering where it was a decade or two ago), it still suffers from the fact that the paradigm of software testing is one of reactive thinking.
Why do we hire testers? Because we proved that developers couldn’t find all of their own bugs, and that developers perhaps weren’t in the position to be the most effective at that process anyway (just like I’m probably not the most effective person to debug and test my own test scripts or review my own test plans). For the state of the tester’s art to improve and flourish, part of the effort is going to require that we stop working exclusively in a reactive mode and work more towards finding proactive solutions to the situations we are facing. Microsoft is no stranger to many of these issues and questions, and in this chapter, Alan goes through and describes some of the more forward-looking methods that Microsoft is implementing to try to get a handle on the future of testing and the challenges it will present.
Automatic Failure Analysis
The scary thing when it comes to a company like Microsoft is that, with all of the applications, their respective platforms, flavors that run on the desktop, on the web, and in the cloud, mixed with language support, a platform or application could have tens of thousands of test cases, and quite possibly more. With a few failed cases, analysis is relatively easy to focus on the specific issues raised during the tests. When dealing with test points that may number in the hundreds of thousands or even millions, looking at even 1% of failures for such systems is still a terrifying proposition (if you have a million total test cases, and you pass 99% of them and only fail 1%, where do you begin to whittle down 10,000 failed test cases?).
Automated testing allows for the ability to run thousands of test cases (perhaps even hundreds of thousands of cases) with very little human interaction. Test analysis, on the other hand, still requires a human touch and observation. When that needed human touch means looking at tens of thousands of cases, the desire for automated methods that can at least provide some first-level analysis is clear. Implementing such a solution, of course, is another thing entirely.
Overcoming Analysis Paralysis
Too many failures requiring too many test cycles to determine what is happening can become old, daunting, and downright scary really quickly. How is a tester supposed to deal with all of them? As said above, part of the focus needs to be on creating automation that will allow for first-order analysis and give an idea of what is happening with the errors, but even more important is the ability to get onto and focus on the errors before they become overwhelming. In this manner, efforts to examine and determine root-cause issues will go a long way towards making sure that a backlog of errors doesn’t develop, along with the high blood pressure of the testers trying to make sense of it all.
To complicate matters, if a test fails multiple times, there’s no guarantee that the test failed each time for the same reason(s). There could be different factors coming into play that will skew the results or give false conclusions. Investigating the failures and determining what is really causing the problems is a skill and a black art that will need to be evolved and practiced at a greater rate for all testers going forward, because systems are not getting any simpler!
The Match Game
When we run across an error in software we are testing, depending on the culture, the bug reports that are created tend to be very specific about the error, what happened to make it occur, and the environment details that were active when the error occurred. The automated counterparts are very often nowhere near that detailed, if they mention much more than the fact that the test failed. Since many automated tests are looking for match criteria to determine the PASS/FAIL of a test, getting log details is a mission-critical aspect of any automation testing scheme.
Microsoft has created a failure database which, as its name implies, has information about each known system failure. Each test run is compared to the information in the database, and if there is a match, a bug is auto-generated that references the known issue (which sounds great for system issues that are variations on a theme).
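To make that idea concrete, here is a minimal sketch (in Python, with entirely hypothetical names and data; the book doesn’t spell out the actual implementation) of what matching a failure log against a database of known-failure signatures might look like:

import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class KnownFailure:
    bug_id: int
    signature: str  # regular expression matched against the failure log

# Hypothetical "failure database": known issues keyed by a log signature.
KNOWN_FAILURES = [
    KnownFailure(1234, r"Win32BoolAPI .* returned 0, expected 1"),
    KnownFailure(5678, r"connection timed out after \d+ seconds"),
]

def match_known_failure(log_text: str) -> Optional[int]:
    """Return the bug ID of a matching known failure, or None if this looks new."""
    for known in KNOWN_FAILURES:
        if re.search(known.signature, log_text, re.IGNORECASE):
            return known.bug_id
    return None

def triage(test_name: str, log_text: str) -> str:
    bug_id = match_known_failure(log_text)
    if bug_id is not None:
        # A real system would annotate the existing bug rather than file a duplicate.
        return f"{test_name}: matched known issue #{bug_id}"
    return f"{test_name}: no match found, needs human investigation"

Multiply that by hundreds of thousands of test points and the appeal is obvious: only the failures that don’t match anything already in the database need a human to look at them first.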
Good Logging Practices
Writing to a log is a common occurrence for automated tests. However, with a little additional effort, that log file data can be a treasure trove of information about the health and well-being of a software product. To leverage the benefits of logging and help design tests that can stand the test of time, Alan recommends the following (I’ve added a small sketch after the list to show how this might look in practice):
Logs should be terse on success and verbose on failure: In practice, “noisy” tests are often poorly written tests. Each piece of information recorded to the log should have some purpose in diagnosing an eventual test failure. When a test fails, the test should trace sufficient information to diagnose the cause of that failure.
When a test fails, trace the successful operation(s) prior to the observed failure: Knowing the state of the last good operation helps diagnose where the failure path began.
Logs should trace product information, not information about the test: It is still a good idea to embed trace statements in automated tests that can aid in debugging, but these statements do not belong in the test results log.
Trace sufficient and helpful failure context: Knowing more about how the failure occurred will assist in diagnosis of the underlying defect. Instead of logging:
Test Failed
Log something like:
Test Failed
Win32BoolAPI with arguments Arg1, Arg2, Arg3
returned 0, expected 1.
Or:
Test Failed
Win32BoolAPI with arguments Arg1, Arg2, Arg3
returned 0 and set the last error to 0x57,
expected 1 and 0x0
Avoid logging unnecessary information: Log files do not need to list every single action executed in the test and underlying application. Remember the first rule above and save the verbose logging for the failure scenarios.
Each test point should record a result when a result has been verified or validated: Tests that aggregate failures often mask defects. If a test is in a fail-and-continue mode, it is important to know where each failure occurred to diagnose which subsequent failures were dependent and which were independent of the previous failures.
Follow team standards on naming: Standards can help ensure consistency in reading the log files. All object, test, and procedure names should make sense and be non-degenerate (one name for one thing).
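As promised above, here is a minimal sketch (Python, with hypothetical names; not anything taken from the book) of a test-point recorder that follows the “terse on success, verbose on failure” rule by buffering context and only emitting it when a verification fails:

import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("testrun")

class TestPointRecorder:
    """Buffers product-state context and only writes it out when a test point fails."""

    def __init__(self, test_name: str):
        self.test_name = test_name
        self._context = []  # prior successful operations, traced only on failure

    def note(self, message: str) -> None:
        # Record what the product just did; stays silent unless a failure follows.
        self._context.append(message)

    def verify(self, description: str, actual, expected) -> bool:
        if actual == expected:
            # Terse on success: one line per verified test point.
            logger.info("PASS: %s - %s", self.test_name, description)
            return True
        # Verbose on failure: trace the prior operations plus actual vs. expected.
        logger.error("FAIL: %s - %s", self.test_name, description)
        for line in self._context:
            logger.error("  prior operation: %s", line)
        logger.error("  returned %r, expected %r", actual, expected)
        return False

# Example usage (hypothetical API under test):
# rec = TestPointRecorder("Win32BoolAPI_basic")
# rec.note("called Win32BoolAPI with arguments Arg1, Arg2, Arg3")
# rec.verify("Win32BoolAPI return value", actual=0, expected=1)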
Machine Virtualization
One of the best decisions that Tracker Corp. (the company I worked at from 2005 to 2011) made was to put the majority of our test-specific environments on a fairly beefy Windows Server 2008 machine, max out the RAM and disk space, and load it to the gills with virtual machines running under Hyper-V. If I, a lone tester, found this setup to be a tremendous blessing, I can only imagine how welcome this type of technology would be to the testing professionals at Microsoft, and I’d wager they consider it a blessing for the same reasons I did.
Ten years ago, I worked for Connectix, the company that first developed Virtual PC, which is in many ways the precursor to Hyper-V. I found the Virtual PC model to be very helpful for setting up test environments, tearing them down, and cloning them for various tests, as well as for setting up virtual networks that allowed me to simulate client/server transactions as though they were part of an isolated network. Hyper-V virtual machines allow much the same, and add considerable enhancements as well.
Virtualization Benefits
The benefits of virtualization are many, not the least of which is the ability to create, store, run, archive, and shuffle an almost limitless number of testing environments. With Windows Server 2008 Datacenter, there is no limit to the number of virtual machines that can run at any given time (well, your RAM, CPU, and disk space will certainly impose limits, but on a mid-grade server machine, I frequently ran 10 to 15 virtual machines at a time). Being able to manage and run that many simultaneous machines is a wonderful convenience. More to the point, with the machine located in a server room and all access via RDP, the footprint for these machines was tiny (a single server machine, a dream come true for any tester who has had to maintain multiple physical machines in a cube, office, or lab).
More to the point, it’s not just running all of these machines simultaneously; it’s also the ability to configure the machines to run on real and virtual networks, create domain controllers and private domains, Virtual Private Networks (VPNs), and a variety of services running on different machines (web, database, file service, etc.). As mentioned in the last chapter, services are often run on multiple machines as separate components. These components can also run in virtual machines and be tested on a single host server with as many guests as needed to facilitate the services. Configuring and updating the machines can be done both in real time and when the machines are offline.
Outside of the ability to create and deploy test machines rapidly by creating guest machines from a library of pre-configured disk images, the true beauty of Hyper-V is its extensive ability to snapshot virtual machine images, in many cases several snapshots per machine. While only one snapshot could be active at any given time, I often had machines with several snapshots that allowed me to test iterative steps, and if any of the steps had a problem, I could restore back one, two, three steps or more, or all the way back to the beginning of the process. I don’t necessarily recommend this as a regular practice for all virtual machines (relying on too many snapshots can greatly increase the odds of a failure in one of them, and when you lose or have a corrupted snapshot, it’s just gone). Still, even with that risk, the ability to have a safeguard between steps and a quick way to go back to a known state saved countless hours of configuration and set-up time over the past few years. I can only imagine how huge the savings would be for an organization the size of Microsoft.
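For what it’s worth, here is a rough sketch of the checkpoint-between-steps workflow I’m describing, written in Python and shelling out to the Hyper-V PowerShell cmdlets (Checkpoint-VM and Restore-VMSnapshot, which come with later Hyper-V releases than the 2008-era box I used; the VM and snapshot names here are made up):

import subprocess

def powershell(command: str) -> str:
    """Run a PowerShell command on the Hyper-V host and return its output."""
    result = subprocess.run(
        ["powershell.exe", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def take_checkpoint(vm_name: str, label: str) -> None:
    # Snapshot the guest before a risky configuration or install step.
    powershell(f'Checkpoint-VM -Name "{vm_name}" -SnapshotName "{label}"')

def roll_back(vm_name: str, label: str) -> None:
    # Restore the guest to the saved state if the step went badly.
    powershell(f'Restore-VMSnapshot -VMName "{vm_name}" -Name "{label}" -Confirm:$false')

# Example: checkpoint between each step of an upgrade scenario so a failed
# step can be retried from a known-good state instead of rebuilding the VM.
# take_checkpoint("Win7-Client-01", "before-upgrade-step-2")
# ...run the upgrade step, inspect the result...
# roll_back("Win7-Client-01", "before-upgrade-step-2")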
Test Scenarios Not Recommended with Virtualization
While virtualization is a fantastic technology and a lifesaver in many situations, it does have some drawbacks. Specifically, most of the environments are optimized for virtualization, and thus use virtual hardware. Applications that require access to real hardware peripherals in a real-time mode will not be able to get that through virtualization. The video modes used are optimized for use via virtual machines, so CAD and 3D gaming/simulation are not good candidates for virtual machines, as they will be grossly underpowered. Ultimately, virtual machines are bounded by the servers they are running on; the guest machines together have to share the total CPU, RAM, and disk space of the host server. If the host maxes out, the virtual machines likewise max out; there’s nowhere to go but down.
Code Reviews and Inspections
Alan makes the point that even the manuscript for each chapter of HWTSAM goes through multiple hands. The reason is that, no matter how many times he reviews his own work, someone else will see things in a different light or notice something he’s missed, simply because Alan knows the intent of what he’s wanted to write, and thus may totally skim over something that is obvious to anyone else (I suffer through this myopia myself with just about every blog post that I write).
The code review process does the same thing for developers and the code that they write. In the same way that handing off a manuscript to others can help flush out punctuation and grammatical issues, code reviews can flush out bugs in code before it is even compiled.
Microsoft puts a lot of emphasis on code review and on having testers get involved in the process (note: Alan’s talk at the 2010 Pacific Northwest Software Quality Conference was specifically about software testers taking part in the code review process).
Types of Code Reviews
Code reviews can range anywhere from informal quick checks to in-depth, very specific review sessions with a team of engineers. Both share many attributes, and while one is less rigorous than the other, they both aspire to the same thing: using criteria to determine whether code is accurate, solid, and well constructed.
Formal Reviews
Fagan inspections (named after the inventor of the process, Michael Fagan) are the most formal code reviews performed. A group of people are selected to review the code, with very specific roles and processes that need to be adhered to. Formal meetings with roles assigned to each participant are hallmarks of the method.
Those participating are expected to have already read and pre-reviewed the code. As you might guess, these inspections take a lot of time and manpower to accomplish, but they are very effective when harnessed correctly. While this method is effective, the intensely time-consuming aspect of it is actually part of the reason why the technique is not widely used at Microsoft.
Informal Reviews
The big challenge with informal reviews is that, while they are indeed faster, they are not as comprehensive and may not hit many of the deeply entrenched bugs the way a more formal process would. Alan showcases a number of methods utilized, ranging from pair programming sessions with review specifically in mind, to email round-robins where code is inspected and commented on, to over-the-shoulder checks. All of them have their pluses and minuses, and in most cases, the trade-off for a more thorough process is the time spent to actually do it.
Checklists
Checklists have the ability to focus reviewers on areas that might be considered the most important, or otherwise areas that need to be covered so that a thorough job is done. An example checklist provided by Alan follows below:
- Functionality Check (Correctness)
- Testability
- Check Errors and Handle Errors Correctly
- Resources Management
- Thread Safe (Sync, Reentry, Timing)
- Simplicity/Maintainability
- Security (INT Overflow, Buffer Overruns, Type Mismatches)
- Run-Time Performance
- Input Validation
Taking Action
In addition to the number of issues found by activity, it can be beneficial to know what kinds of issues code reviews are finding. Below are examples of rework issues found during various Microsoft code reviews, along with some steps to help find these issues prior to code review.
- Duplicate code: for example, re-implementing code that is available in a common library. Prevention: educate the development team on available libraries and their use; hold weekly discussions or presentations demonstrating the capabilities of libraries.
- Design issue: for example, the design of the implementation is suboptimal or does not solve the coding problem as efficiently as necessary.
- Functional issue: for example, the implementation contains a bug or is missing part of the functionality (omission error).
- Spelling errors. Prevention: implement spell checking in the integrated development environment (IDE).
Time Is on My Side
While code reviews are important, it’s also important to consider the time impact of conducting them. Guessing how much time they take is usually wildly inaccurate, so mechanisms that actually measure how long it takes to review code could prove to be very beneficial. The time it takes to review code can be very subjective, depending on the complexity of the systems being reviewed, but the following questions can help focus the time requirements (a small sketch of the arithmetic follows the list):
- What percentage of our time did we spend on code review versus code implementation?
- How much time do we spend reviewing per thousand lines of code (KLOC)?
- What percentage of our time was spent on code reviews this release versus the last release?
- What is the ratio of time spent to errors discovered (issues per review hour)?
- What is the ratio of issues per review hour to bugs found by test or customer?
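As a small illustration of the questions above, here is a sketch (Python; the names and numbers are invented) of how a team might compute these review metrics from a handful of raw counts:

from dataclasses import dataclass

@dataclass
class ReviewStats:
    review_hours: float          # time spent reviewing code
    implementation_hours: float  # time spent writing code
    lines_reviewed: int
    issues_found_in_review: int
    bugs_found_later: int        # found by test or by customers after review

    def review_time_fraction(self) -> float:
        total = self.review_hours + self.implementation_hours
        return self.review_hours / total if total else 0.0

    def hours_per_kloc(self) -> float:
        return self.review_hours / (self.lines_reviewed / 1000) if self.lines_reviewed else 0.0

    def issues_per_review_hour(self) -> float:
        return self.issues_found_in_review / self.review_hours if self.review_hours else 0.0

    def review_to_escape_ratio(self) -> float:
        # Issues caught in review compared to bugs that escaped to test or customers.
        return (self.issues_found_in_review / self.bugs_found_later
                if self.bugs_found_later else float("inf"))

stats = ReviewStats(review_hours=12, implementation_hours=88,
                    lines_reviewed=4500, issues_found_in_review=18, bugs_found_later=6)
print(f"{stats.review_time_fraction():.0%} of engineering time spent in review")
print(f"{stats.hours_per_kloc():.1f} review hours per KLOC")
print(f"{stats.issues_per_review_hour():.1f} issues found per review hour")
print(f"{stats.review_to_escape_ratio():.1f} review issues per escaped bug")

Tracking the same figures release over release is what answers the third question in the list: whether the investment in review is growing or shrinking relative to the bugs it prevents.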
More Review Collateral
Alan makes the case that there is a lot of communication that goes on in code reviews. Rather than letting the majority of it stay verbal, it would be helpful to capture all of that feedback; unless that feedback is captured some way, it’s lost in the ether. Formal code review tools can help with this process, and incorporating email comments, or even just marking up the code and being verbose with commentary, can prove to be very helpful.
Two Faces of Review
More than just providing the opportunity to review and refactor code, reviews also give everyone on the team a better chance to learn the functionality and underlying calls throughout the system in a comprehensive manner. In short, the biggest benefit of code reviews, beyond finding bugs, is in educating the development team and test team as to what the code actually contains.
Tools, Tools, Everywhere
There’s a double-edged sword in having so many software developers and SDETs at Microsoft.
The good news: there’s a huge number of tools to help with the situations developers and testers might face (lots of options and choices for specific methods and implementations).
The bad news: there’s a huge number of tools to help with the situations developers and testers might face (potentially too many choices, making it hard to determine which tool is appropriate for a specific method or implementation). To this end, there are dozens of automation frameworks out in the wild at Microsoft, and each has a different purpose. Knowing which tool to use and when is not a chance encounter; users have to know what they are doing and when.
Reduce, Reuse, Recycle
Alan makes the point that Microsoft Office, before it became Office, was a collection of separate tools (I remember distinctly the days when we would buy Access, Excel, PowerPoint, and Word as separate applications to install on systems). When the decision was made to bundle these applications together, along with Outlook, and create the Microsoft Office suite, it was discovered that many functions were repeated or operated very similarly throughout the respective applications. Rather than maintain multiple versions of code, the decision was made to create a dedicated library of functions that all of the tools would utilize together. Doing this simplified the coupling of the applications to one another and made it possible to have a similar look and feel and to conduct transactions in a similar manner between tools.
Additionally, the greater benefit is that, for testers, there is less code to wade through and fewer specific test cases required; the shared library allowed for a more efficient use of code modules.
This system works great when code is being shared between product lines. The challenge comes when trying to do the same thing between functional groups looking to create test harnesses and frameworks. Without an up-front decision to consolidate and work on frameworks together, there isn’t much incentive for functional groups to consult with one another as to how they are each doing their testing and what code they are writing to do it.
What’s the Problem?
Generally speaking, if separate groups are working on different goals, there may be no benefit at all to making sure that automated tests and frameworks are standardized. In other contexts, though, it may prove to be very helpful, in that it will allow organizations to be more efficient and make better use of the respective tools. What’s more, development of tests may well go faster because all testers are “on the same page” and understand both the benefits and limitations of the specific systems.