This is the fourth installment in the TESTHEAD BOOK CLUB coverage of “How We Test Software at Microsoft”. With this chapter, we move into what Alan referred to in the introduction as the “About Testing” phase of the book, where he and the other authors talk about Testing skills in general. Chapter 4 covers Test Case Design.
Chapter 4: A Practical Approach to Test Case Design
Alan makes the case that, generally speaking, Microsoft does not develop products with a short shelf life. While some applications and operating systems have a bigger impact than others, it’s generally a given that an operating system or an office suite may well be in use for 10 years or more, so the tests that are developed need to be robust enough to remain in active use for at least a decade, if not longer. As a user of certain applications, I know this to be true. While I have a laptop running the latest and greatest applications (Windows 7, Office 2010, etc.), my work desktop is still running Windows XP and Office 2007. Various virtual machines that I maintain and use for testing have operating systems as far back as Windows 2000, along with Office 2000, SQL Server 2000, and so on, because we have customers that still actively use these technologies. So I get where Alan’s coming from here.
Test automation is especially helpful when you have nailed down the long-term test cases and are reasonably confident they will not change (or the likelihood of change is very small). Manual testing is also required, but making the effort to design test cases that are robust, intelligent, and focused is essential regardless of the method used to execute them.
Neat aside: Microsoft Office 2007 had more than 1 million test cases written for it. Divide that by the three thousand “feature crews” (see the Chapter 3 summary for a description of a feature crew) and that gives an average of around three hundred test cases per feature crew. That’s entirely believable, and may even be seen as “light” by some standards, but when combined, wow!
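The arithmetic is easy enough to sanity-check; a trivial sketch using the round numbers above:

    # Back-of-the-envelope check of the averages above (round numbers assumed).
    test_cases = 1_000_000
    feature_crews = 3_000
    print(test_cases / feature_crews)  # roughly 333 test cases per feature crew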
Practicing Good Software Design and Test Design
Software design requires both planning and problem solving. The user experience and the end goal of the work that needs to be done by the stakeholders in question guide those decisions. It’s not always necessary to do what Alan describes as “big design up front” (BDUF). When using Agile, ideally, the code itself is considered the design, but it still requires planning to be designed well.
Designing tests and designing the software have many parallels. Planning and problem solving are essential. The tester needs to know what to test, when to test, and what the risks are for what doesn’t get tested. Customer needs and expectations must be addressed. Good test design often stems from a thorough review of the software design.
Using Test Patterns
Alan references work by Robert Binder and Brian Marick to describe various Test Patterns that mirror or directly associate with patterns used in Software development. The idea behind test patterns is that they are meant to provide guidance for testers when they design tests. Some patterns are used for structured testing and heuristics, and other patterns go into different areas entirely. Test patterns allow the tester to communicate the goals of a strategy in a way that others can understand (development, support, sales, etc.).
Alan includes a template based on Robert Binder’s test design ideas as an example to help see the attributes of a test pattern.
• Name: Give a memorable name—something you can refer to in conversation.
• Problem: Give a one-sentence description of the problem that this pattern solves.
• Analysis: Provide a one-paragraph description of the problem area and answer the question of how this technique is better than simply poking around.
• Design: Explain how this pattern is executed (how does the pattern turn from design into test cases?).
• Oracle: Explain the expected results (can be included in the Design section).
• Examples: List examples of how this pattern finds bugs.
• Pitfalls or Limitations: Explain under what circumstances and in which situations this pattern should be avoided.
• Related Patterns: List any related patterns (if known).
The key benefit of this approach is the flexibility to create different patterns while still providing enough structure for testers, developers, and other interested parties (support, sales, etc.) to understand what is happening and discuss it in a common language.
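As an illustration of how a team might keep such patterns in a shared, reviewable form, here is a minimal Python sketch; the field names mirror Binder’s template above, while the TestPattern class and the example boundary pattern are hypothetical, not from the book:

    # A minimal sketch of capturing a test pattern in code so it can be shared
    # and reviewed; the example values below are invented for illustration.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class TestPattern:
        name: str
        problem: str
        analysis: str
        design: str
        oracle: str
        examples: List[str] = field(default_factory=list)
        pitfalls: str = ""
        related_patterns: List[str] = field(default_factory=list)

    boundary_pattern = TestPattern(
        name="Boundary Probe",
        problem="Off-by-one defects cluster at the edges of input ranges.",
        analysis="Targets limits directly instead of poking around the middle of the range.",
        design="For each bounded input, generate cases at min-1, min, max, and max+1.",
        oracle="Values inside the range are accepted; values outside are rejected with a clear error.",
        examples=["Found a crash when a 256-character name overflowed a 255-byte field."],
        pitfalls="Adds little value for inputs with no meaningful bounds.",
        related_patterns=["Equivalence Class Partitioning"],
    )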
Estimating Test Time
Sometimes, this is the bane of my existence. Estimating test time effectively can range anywhere from methodical figures based on specifics from past projects (rare, but it can happen given enough time and iterations) to what I frequently refer to as “SWAGing” (SWAG stands for “silly wild-assed guess”). I wish I were kidding about that, but when we deal with new technologies, SWAGing is often the best we can do.
Alan argues that simply adding a few weeks of "buffer" or "stabilization" time at the end of a product cycle doesn’t do the trick. In fact, this often causes more problems.
So how long should testing take? Alan mentions a rule of thumb of mirroring the development time. Two man-weeks to develop a feature? Expect it to take the same two man-weeks to write the automated tests and define the manual tests… and execute them. Even this can be wrong, since there are a lot of factors to consider when testing; a complex feature could require ten times the time it took to code because of the numerous variations that have to be applied to determine if it’s “road worthy”. Some things to consider (a rough estimation sketch follows the list):
• Historical data: How long have previous projects similar to this one taken to test?
• Complexity: The more bells and whistles, the more possible avenues to test, and the longer it will take to perform many of them (note I didn’t say all of them; even simple programs could have millions of possible test cases, and testing them all can range from impractical to impossible)
• Business goals: Web widget for a game, or control software for a pacemaker? End use and expectations can have a huge effect on the time needed to test a feature.
• Conformance/Compliance: Is there a regulatory body we need to answer to? What regulations do we have to make sure we have covered?
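To make the rule of thumb and these adjustment factors concrete, here is a rough, illustrative sketch; the estimate_test_weeks helper and its multipliers are my own invention for demonstration, not a formula from the book:

    # An illustrative estimation heuristic: start from "mirror the development
    # time" and scale it by judgment calls about complexity, criticality, and
    # compliance overhead.
    def estimate_test_weeks(dev_weeks, complexity=1.0, criticality=1.0, compliance_weeks=0.0):
        """Return an estimated number of tester-weeks.

        dev_weeks        -- effort spent developing the feature
        complexity       -- multiplier for feature complexity (1.0 = typical)
        criticality      -- multiplier for business/safety criticality (1.0 = typical)
        compliance_weeks -- fixed overhead for regulatory or conformance testing
        """
        return dev_weeks * complexity * criticality + compliance_weeks

    # Two man-weeks of development on a typical feature: expect about two tester-weeks.
    print(estimate_test_weeks(2))                                                     # 2.0
    # The same feature in a regulated, safety-critical context looks very different.
    print(estimate_test_weeks(2, complexity=3.0, criticality=2.0, compliance_weeks=1.0))  # 13.0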
Starting with Testing
Have you ever said to yourself “wouldn’t it be great if the test team could be involved from the very beginning”? Personally, I’ve said this many times, and in some aspects of Agile development, I’ve actually had my wish fulfilled. More often than not, though, testing comes later in the process, and I’ve lamented that my voice couldn’t be heard earlier in the development and design process (“hey, let’s make testability hooks so that we can look at A, B and C”). Getting testers involved with reviewing the requirements or functional specs can be a good start. Barring the availability of up-front design specs (come on, raise your hand if you’ve ever worked on a project where you got a snicker or an eye roll when you asked for a formal specification to review… yeah, I thought so :) ), the next best thing we can do is ask investigative questions. Put on that Mike Wallace hat (Mike Wallace being the investigative journalist made famous in the States for the TV program “60 Minutes”; my apologies to those elsewhere in the world who have no idea who I am referring to). How does the feature work? How does it handle data? Errors? File I/O? Getting these answers can go a long way toward helping the tester develop a strategy, even when there is little or no design documentation.
Alan makes a segue here with a story about testing with the code debugger. While this may be a head scratcher for those testers who are not regularly looking at source code, he’s correct that getting in, reviewing the code, and stepping through it helps build the understanding needed to see which paths testing will exercise and where to add test cases to improve coverage.
Have a Test Strategy
Having a testing strategy helps keep the team in sync and clarifies which areas to cover first and which to leave until later (or perhaps not deal with at all for the time being). Risk analysis is important here; determining which areas are mission critical and which are lower priority helps make sure you apply your testing time effectively. Part of the strategy should include training the test team in areas where they may need additional understanding (paired sessions, peer workshops, meet-ups, or conferences can help with this). The key takeaway is that part of the testing strategy needs to be making smarter and better-equipped testers.
Alan suggests using the following attributes when devising a testing strategy:
• Introduction: Create an overview and describe how the strategy will be used
• Specification Requirements: What are the documentation plans? What are the expectations from these documented plans?
• Key Scenarios: What customer scenarios will drive testing?
• Testing Methodology: Code Coverage? Test Automation? Exploratory Testing? Case Management? Bug Tracking system?
• Test Deliverables: What is expected of the test team? How do we report our results? Who will see them? When do we know we are done?
• Training: How do we fill in knowledge gaps? How do we help transfer skills to other testers?
Thinking About Testability
Testability means having software in a state where testing is possible, and ideally optimized for it. I’m a big fan of developers who look to create test hooks to help make the job of testing easier or more productive. I cannot count how many times I’ve asked “so how will we be able to test this?” over the years. Sometimes I’ve been met with “well, that’s your problem,” but often I’ve had developers say “Oh, it’s no problem, you can use our API,” or “I’ve created these switch options you can set so you can see what is going on in the background,” or “setting this option will create a log file to show you everything that is happening.” When in doubt, ask, preferably early in the process. Testing hooks become harder to create as the project moves on and gets closer to the ship date. Not impossible, but it adds time that could be used for feature development and hardening. Better to ask early in the game :).
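As a hedged example of the kind of hook I mean, here is a small sketch of an opt-in trace log controlled by an environment variable; the MYAPP_TRACE_LOG name and the surrounding module are hypothetical:

    # An opt-in trace log a tester can switch on to see what is happening
    # behind the scenes, without changing the product's normal behavior.
    import os
    import logging

    def configure_tracing(logger_name="myapp"):
        """Enable detailed tracing to a file when MYAPP_TRACE_LOG is set."""
        logger = logging.getLogger(logger_name)
        trace_path = os.environ.get("MYAPP_TRACE_LOG")
        if trace_path:
            handler = logging.FileHandler(trace_path)
            handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
            logger.addHandler(handler)
            logger.setLevel(logging.DEBUG)
        return logger

    logger = configure_tracing()
    logger.debug("cache refresh started")  # only lands in the log file when the hook is on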
Alan uses the acronym SOCK to describe a model for helping make software more testable in the development stage (a small illustration follows the list):
• Simple: Simple components and applications are easier (and less expensive) to test.
• Observable: Visibility of internal structures and data allows tests to determine accurately whether a test passes or fails.
• Control: If an application has thresholds, the ability to set and reset those thresholds makes testing easier.
• Knowledge: By referring to documentation (specifications, Help files, and so forth), the tester can ensure that results are correct.
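Here is a small sketch of the Observable and Control attributes in practice, assuming a hypothetical component with a retry threshold: a test can lower the threshold (Control) and assert on the attempt count (Observable) rather than guessing from the outside.

    # A hypothetical component designed with Control and Observability in mind.
    class Downloader:
        DEFAULT_MAX_RETRIES = 5

        def __init__(self, max_retries=None):
            # Control: tests can lower the threshold instead of waiting out 5 failures.
            self.max_retries = max_retries or self.DEFAULT_MAX_RETRIES
            self._attempts = 0

        @property
        def attempts(self):
            # Observable: internal state a test can assert on without guessing.
            return self._attempts

        def fetch(self, get_resource):
            last_error = None
            for _ in range(self.max_retries):
                self._attempts += 1
                try:
                    return get_resource()
                except ConnectionError as error:
                    last_error = error
            raise last_error

    def test_fetch_stops_at_the_configured_threshold():
        downloader = Downloader(max_retries=2)
        def always_fails():
            raise ConnectionError("network down")
        try:
            downloader.fetch(always_fails)
        except ConnectionError:
            pass
        assert downloader.attempts == 2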
How do you test hundreds of modems?
We needed to test modem dial-up server scalability in Microsoft Windows NT Remote Access Server (RAS) with limited hardware resources. We had the money and lab infrastructure only for tens of modems, but we had a testability issue in that we needed to test with hundreds of modems to simulate real customer deployments accurately. The team came up with the idea to simulate a real modem in software and have it connect over Ethernet; we called it RASETHER. This test tool turned out to be a great idea because it was the first time anyone had created a private network within a network. Today, this technology is known as a virtual private network, or a VPN. What started as a scalability test tool for the Windows NT modem server became a huge commercial success and something we use every time we "tunnel" into the corporate network.
—David Catlett, Test Architect
Test Design Specifications
Designing tests well is arguably as important as designing the software well. Using a test design specification (TDS) can be effective for both manual and automated tests, because it describes both the approach and intent of the testing process. This also has the added benefit of allowing those who take on responsibility for maintaining the product down the line to easily interpret the intent of the testing and what the software’s expected parameters are meant to be.
These are items that might be found in a typical TDS:
• Overview/goals/purpose
• Strategy
• Functionality testing
• Component testing
• Integration/system testing
• Interoperability testing
• Compliance/conformance testing
• Internationalization and globalization
• Performance testing
• Security testing
• Setup/deployment testing
• Dependencies
• Metrics
Testing the Good and the Bad
As testers, we are often called on to do devious and not very pleasant things to software. As is often quoted, while a developer will try very hard to make sure the software does the ten things it needs to do right, a tester is tasked with finding the one (or more) ways we can cause the software to fail.
We perform verification tests (tests that verify functionality using expected inputs) and falsification tests (tests that use unexpected data to see whether the program handles that data appropriately). James Whittaker’s #1 attack is to find out every error message associated with a program/function/feature, and force the program to make that error appear at least once (for more on Whittaker attacks, check out his book “How to Break Software”).
In many ways, the falsification tests are more important than the tests that prove the product “works”. Of course, every one of us has heard the refrain from a developer or development manager “yeah, but what customer would ever do that?” After almost 20 years of testing, I can confirm that, if a customer can do something, they most likely will, whether they intend to or not!
Very often, we test the “happy path” of an application because, as a customer would, we want to cover the things we expect to work. This is not the nefarious, evil testing that testers are famous for, but a focus on tasks the average user would do. Very often we find that errors occur even here, such as a button that should bring up a report, but clicking the button does nothing. Alan makes the point that “the Happy Path should always pass,” and generally that’s true, but even though the Happy Path is well-covered ground, all it takes is one change to throw a few rocks into it.
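To make the verification/falsification split concrete, here is a minimal pytest sketch; the parse_age function and its rules are hypothetical stand-ins, not examples from the book:

    # One happy-path (verification) test and one falsification test.
    import pytest

    def parse_age(text):
        """Convert user input to an age; raise ValueError with a clear message otherwise."""
        value = int(text)          # raises ValueError on non-numeric input
        if not 0 <= value <= 150:
            raise ValueError("age must be between 0 and 150")
        return value

    def test_happy_path_valid_age():
        # Verification test: expected input, expected output.
        assert parse_age("42") == 42

    @pytest.mark.parametrize("bad_input", ["", "forty-two", "-1", "9999"])
    def test_falsification_bad_age(bad_input):
        # Falsification test: unexpected data should fail loudly, not silently.
        with pytest.raises(ValueError):
            parse_age(bad_input)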
Other Factors to Consider in Test Case Design
Simply put, it is impossible to test everything, even in simple programs. Large, complex applications are even more challenging. Adding people, tools, time, etc. will get you a little closer, but complete and total test coverage is an asymptote; you may get close, but you’ll never really reach it (and in many cases, you won’t get anywhere near complete and total test coverage). This is where considering things like the project scope, the mission of testing, risk analysis, and the tester’s skills comes into play.
Black Box, White Box, and Gray Box
Alan goes on to define black box, white box, and gray box testing. The key points are that black box testing focuses on the inputs and expected outputs (good and bad) of a system without knowledge of (or concern for) the underlying code, while white box testing focuses on knowing the specific code paths and constructing cases around intimate knowledge of the code structures. In practice, effective testing blends both, and this is where Alan’s and Microsoft’s definition of gray box testing comes into play: tests are designed from the customer’s perspective first (black box), but then reference the code to make sure the tests actually cover the areas of the functions in question (white box).
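Here is a small, hypothetical illustration of that blend: the test drives a Cart the way a customer would, then peeks at the internal structure to confirm the intended code path was exercised.

    # Gray box sketch: customer-facing behavior first, internal check second.
    class Cart:
        def __init__(self):
            self._items = {}            # internal structure: sku -> quantity

        def add(self, sku, quantity=1):
            self._items[sku] = self._items.get(sku, 0) + quantity

        def total_items(self):
            return sum(self._items.values())

    def test_add_merges_duplicate_skus():
        cart = Cart()
        cart.add("SKU-1")
        cart.add("SKU-1", 2)
        # Black box check: the observable behavior the customer cares about.
        assert cart.total_items() == 3
        # White box check: duplicates were merged into one entry, not appended.
        assert cart._items == {"SKU-1": 3}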
Exploratory Testing at Microsoft
Alan describes exploratory testing as a generally manual approach to testing where every step of testing influences the subsequent step. During exploratory testing, testers use their knowledge of the application and what they know of testing methods to find bugs quickly. Teams often schedule "bug bashes" throughout the product cycle, where testers take part of a day to test their product using an exploratory approach (we do the same thing at my company, but we call them "Testapaloozas"; basically the same thing and approach). The goal is to effectively go "off script" and see what we can discover.
Microsoft uses pair testing for many of these efforts. The idea is that two testers team up for an exploratory testing session. One tester sits at the keyboard and exercises the feature or application while the other tester stands behind or sits next to the first tester and helps to guide the testing. Then the testers switch roles. Alan reports that, in a single 8-hour session, 15 pairs of testers found 166 bugs, including 40 classified as severity 1 (bugs that must be fixed as soon as possible). In this case, it shows that two heads can often be better than one (and can often be a lot more fun, too).
Chapter 5 will be covered on Thursday.
Monday, November 29, 2010
Inspiration From Interesting Places
Today's post is a little bit different. Sometimes all it takes is a stop at a soda shop to make one see things in a whole new light. Of course, if that soda shop happens to be Galco's in the Highland Park neighborhood of Los Angeles, you may well get lots more inspiration than you planned for!
A little background... my son has been fascinated with Galco's ever since he found a YouTube video made with its owner, John Nese. The video is a 12 minute exchange and talks about John's fascination with various sodas and his desire to make a one stop shop for every rare soda imaginable (currently, he carries 500 brands). We had a chance to visit Galco's when we went down to spend time with my family in Burbank, CA.
One of the most interesting comments John made, about what helped develop Galco's from a standard grocery store into the Soda Pop Supermarket it is today, concerned the time he flatly refused a Pepsi distributor who was offering him a price option to carry Pepsi cola in cans. John decided not to carry it because his customers could get a better deal on those at the Ralph's market down the street. The Pepsi distributor gave John a hard time, saying "hey, Pepsi is a demand item, and your customers will be put out if you don't carry Pepsi." After a bit of bickering back and forth over a couple of weeks, John made the point of "thank you for reminding me that this is my store and I own my shelf space, and I will do what I want so that I can provide the best value to my customers". This was the step that prompted John to buy various rare and small-run sodas from a variety of small companies. At first people asked him why he was buying all of these sodas that nobody would buy... but when he got to 250 different brands, people instead started asking "where are you finding them?"
Key takeaway: Don't focus on what everyone else is doing. Instead, focus on what will make you different than the rest. Given time, people will look to that difference and want more.
Galco's has a variety of flavors that would absolutely blow your mind, and admittedly, some flavors are better than others (entirely a subjective view, because one person's favorite or idea of "better" will likely not be another person's). The choices are absolutely overwhelming, but that's where having a "domain expert" like John can be a great help. John says he has no agenda other than offering unique flavors and choices, but based on what you are interested in and/or consider a favorite, he can certainly offer suggestions. When I was talking with him, I mentioned that I was a huge fan of Blenheim #3. For those not familiar, Blenheim is a craft ginger ale made in South Carolina and is not exactly mass marketed. Very few places in the Bay Area carry it, so whenever I run across it, I consider it a rare and wonderful treat. Suffice it to say, Galco's carries Blenheim #3, but John looked at me with a sly grin when I commented that it was the hottest ginger ale there is, and he said "oh, I think I can do you one better!" He then showed me "Jamaica's Finest Ginger Beer Hot! Hot! Hot!" (I kid you not, that is the name). John pointed out that this specific ginger beer gets its exceptional heat because it is actually made with raw ginger oil instead of ginger extract. Needless to say, there was indeed a little more bite to the Jamaica's Finest compared to Blenheim #3.
Key takeaway: an expert will help steer you towards what you are really after, if you articulate your goal well and actually listen.
One of the more interesting groups of sodas that Galco's carries is from a Romanian bottler that specializes in "flower sodas". Hang with me here :). Some of you may be familiar with the idea of rose water being added to beverages, but this company from Romania specializes in pressing various sodas from roses, elder flowers (Edelweiss), and even cucumber. Who in their right mind would drink a soda made of cucumber?! My thoughts exactly, but after I had a taste of it, I was amazed. It was light, tasty, and while it definitely gave me a hint of a garden salad in a soda bottle, it was surprisingly good. We bought a bottle to bring home, and after I finished it, I wanted to kick myself for not buying more (John has bought the entire run of this company's flower sodas, so the only place to get these in the States is through John and Galco's). The bottler in question actually couldn't find a store that would give the cucumber soda a chance, but John said "why not" and has been glad that he did.
Key takeaway: be willing to experiment, and try something that on the surface seems crazy. Given a choice, people will gravitate towards the idea if they can try it for themselves.
It was so much fun watching my kids, my wife, and yes, me as well go through the store and just get totally lost in the options available. In addition to my beloved Blenheim #3 and the discovery of Jamaica's Finest, John also introduced me to an amazing soda that he said is probably the rarest one he carries, called Ouzon. It's a soda pop that tastes like Ouzo, but without the alcohol. Seriously, this one floored me. It's really light, with a gentle flavor of anise, which is the main flavoring of Ouzo. It's really hard to describe, but it was something so surprising and, yes, enjoyable, that I will definitely keep my eye out for it (that is, if anyone other than John and Galco's actually carries it).
Key takeaway: It helps to have some rare skills and rare talents up your sleeve. They may not come into play every day, but it's a good bet that when you meet those people who are looking for what you have, they will be delighted that you have it, and will seek that talent out again.
Yep, inspiration can come in some fascinating places, even a soda pop shop... but man, *what* a Soda Pop Shop :)!!!
Sunday, November 28, 2010
BOOK CLUB: How We Test Software at Microsoft (3/16)
Well, I overestimated the amount of time I'd get to put this together (traveling for the holidays left me little time to write up this review; for a relatively short chapter, it’s rather meaty :) ), so this is appearing a little later than intended.
Chapter 3. Engineering Life Cycles
In this chapter, Alan makes a comparison between engineering methodologies and cooking. The ideas that work for a single person need to be modified for a family and what works for a family requires a bit more specific discipline when cooking for 100 (the measurements for ingredients need to be more precise). Software is the same; the conditions that will work or be acceptable for a small group require a different approach when made for a large audience.
Software Engineering at Microsoft
Microsoft doesn't use just "one model" when creating software. Time to market, new innovation or unseating a competitor will require the ability to use different approaches if necessary. Testers need to understand the differences between common engineering models and how they interact with them to be successful.
Waterfall Model
The Waterfall Model is an approach to software development where the end of one phase coincides with the beginning of the next: Requirements flows into Program Design, which flows into Implementation/Coding, which flows into Testing, and then into Maintenance. One advantage is that when you begin a phase, ideally, each previous phase is complete. Another benefit is that it requires design to be completed before coding starts. A disadvantage is that it doesn’t really allow phases to repeat: if there are issues found in testing, going back to the design stage can be difficult if not impossible. That's not how its originator, Winston Royce, planned it.
"An interesting point about waterfall is that the inventor, Winston Royce, intended for waterfall to be an iterative process. Royce’s original paper on the model discusses the need to iterate at least twice and use the information learned during the early iterations to influence later iterations. Waterfall was invented to improve on the stage-based model in use for decades by recognizing feedback loops between stages and providing guidelines to minimize the impact of rework. Nevertheless, waterfall has become somewhat of a ridiculed process among many software engineers—especially among Agile proponents. In many circles of software engineering, waterfall is a term used to describe any engineering system with strict processes." - Alan Page
In many ways, the waterfall model is what was used in the early part of my career. I don't think I've ever been on a project where true and strict adherence to the waterfall model was practiced. We used a variation that I jokingly refer to as the "waterfall/whirlpool". Usually we would go through several iterations of the coding and testing to make sure that we got the system right (or as close to right as we could make it) so that we could ship the product.
Spiral Model
In 1988, Barry Boehm proposed a model based on the idea of a spiral. Spiral development is iterative, and contains four main phases: determine objectives, evaluate risks, engineering, and planning the next iteration.
• Determine objectives: Focus on and define the deliverables for the current phase of the project.
• Evaluate Risks: What risks do we face, such as delays or cost overruns, and how can we minimize or avoid them completely.
• Engineering: The actual heavy lifting (requirements, design, coding, testing).
• Planning: Review the project, and make plans for the next round.
This is more akin to what I do today. It sounds almost agile-like, and it's driven mostly by customers who need something special and our willingness to listen and provide it. It's not as rigid as a true waterfall project, but it doesn't quite have the hallmarks of an agile team. Still, it's a functional approach and it can work well with small teams.
Agile Methodologies
Agile is a popular method currently in use. There are many different approaches to Agile, but the following attributes tend to be present in all of them:
- Multiple, short iterations: Deliver working software frequently through "sprints".
- Emphasis on face-to-face communication and collaboration: Fewer walls, more direct communication and sharing of efforts.
- Adaptability to changing requirements: When the requirements need to change, the short iterations make it possible for quick adjustment and the ability to include small changes in each sprint.
- Quality ownership throughout the product: an emphasis on test-driven development (TDD) and prevalent unit testing of code so that developers are doing specific tests to ensure that their code does what they claim it does.
The key idea behind Agile is that the development teams can quickly change direction if necessary. With a focus on "always working" code, each change can be made in small increments and the effects can be known rapidly. The goal of Agile is to do a little at a time rather than everything at once.
This is the direction my company is looking to head for future product development, but we still have a scattering of legacy product code that will not be able to be fully integrated into agile practices. We have done a couple of iterations with our most recent products using an agile approach, and so far, it looks promising.
Other Models
There are dozens of models of software development. There isn’t a single best model, but understanding the model in use and creating software within the bounds of the model you choose can help to create a quality product.
Milestones
The milestone schedule establishes the time line for the project, and key dates where project deliverables are due. The milestone model makes clear that specific, predefined criteria must be met. The criteria typically include items such as the following:
- "Code complete" on key functionality: all functionality is in place, if not fully tested
- Interim test goals accomplished: Verify code coverage or test case coverage goals are met.
- Bug goals met: We determine that there are no catastrophic bugs in the system.
- Nonfunctional goals met: Perhaps better stated as "para-functional" testing, where such things as usability, performance, load and human factors testing have been completed.
Milestones give the developers and the tester a chance to ask questions about the product, and determine how close to "go" they really are. Milestone releases should be an opportunity to evaluate an entire product, not just standalone pieces.
The Quality Milestone
This is a side story that Alan uses to talk about a topic often referred to as "technical debt" and he refers to Matt Heusser and his writings on the subject here (xndev.blogspot.com). The key takeaway is the notion of what happens when a large quantity of bugs are deferred until the next release, or shortcuts are taken to get something out that works but doesn't fulfill 100% of what it should. That shortcut will surely come back and rear its ugly head in the form of technical debt, which means that you are betting on your organization's "future self" to fix the issues, much the way that individuals expect their "future selves" to pay for their car, student loans, or credit card balances. Technical debt and consumer debt have a very big commonality and danger... just how reliable is your future self? We all love to believe that we will be better able to deal with the issues in the future, but how often has that proven to truly be the case? Alan argues that the Quality Milestone is a mid-way point between dealing with everything now and putting yourself or your organization at the mercy of the "future self".
Agile at Microsoft and Feature Crews
Alan states that Agile methodologies are popular within Microsoft, and that the popularity is growing. Agile, however, is best suited to smaller teams (around 10 people or so). Microsoft has a number of large initiatives that have thousands of developers. To meet this challenge, Microsoft scales Agile practices to large teams by using what it calls "feature crews".
Feature crews are designed to meet the following goals:
- It is independent enough to define its own approach and methods.
- It can drive a component from definition, development, testing, and integration to a point that shows value to the customer.
As an example, for the Office 2007 project, there were more than 3,000 feature crews.
The feature crew writes the necessary code, publishes private releases, tests, and iterates while the issues are fresh. When the team meets the goals of the quality gates, they migrate their code to the main product source branch and move on to the next feature. Code grows to integrate functions. Functions grow to become features, and features grow to become a project. Projects have defined start and end dates with milestone checkpoints along the way. At the top level, groups of related projects become a product line. Microsoft Windows is a product line; Windows 7 is a project within that product line, with hundreds of features making up the project.
Process Improvement
Most testers at one point or another come into contact with Dr. W. Edwards Deming's PDCA cycle. PDCA stands for "Plan, Do, Check, Act":
- Plan: Plan ahead, analyze, establish processes, predict results.
- Do: Execute on the plan and processes.
- Check: Analyze the results (note that Deming later changed the name of this stage to "Study" to be more clear).
- Act: Review all steps and take action to improve the process.
Simple enough, right? Common sense, you might say. Perhaps, but simple can be very powerful if implemented correctly. Alan uses the example of issues that were found by testers during late testing that could have been found with better code review. By planning a process around code reviews, the group then performs code reviews during the next milestone. Then, over the course of the next round of testing, the group monitors the issue tracker to see if the changes have made an impact on the issues reported in the later stages of testing. Finally, they review the entire process, metrics, and results to see if the changes they made had enough of an effect to become standard practice.
Microsoft and ISO 9000
Oh Alan, I feel for you! As a young tester, I watched Cisco go through the process, over a few years, of standardizing everything to meet ISO 9000 requirements, and honestly, I wonder if it was wholly worth the investment. Granted, Cisco is one of the biggest tech companies in the world, and at the time it was going through this certification process, it was doubling every year if not more. Still, in many ways, I think the compliance effort was a trade-off that hampered Cisco's then-legendary ability to innovate in a nimble and rapid fashion, and testing became a very heavy, process-oriented affair. I cannot speak to their current processes, as I've been away from them for a decade now, but I think they managed to find ways to meet the letter of the ISO 9000 certifications yet still do what was necessary to be innovative and adapt as needed. From Alan's description, it sounds like Microsoft has done the same thing.
Shipping Software from the War Room
Is a product ready to be released? Does the product meet requirements? At Microsoft, these decisions are analyzed in the "War Room". This seems to be a pretty common metaphor, as it's a phrase that has been used in just about every company I've ever worked with, so it's not just a Microsoft phenomenon. The idea is that a "War Team" meets throughout the product cycle and examines the product quality during the release cycle. The War team is tasked with looking to see which features get approved or cut, which bugs get fixed or punted, whether a team or teams need more people or resources, and whether to stick to or move the release date. At Microsoft, typically the war team is made up of one representative from each area of the product.
Alan uses the following suggestions to make the most of any war room meeting:
- Ensure that the right people are in the room. Missing representation is bad, but too many people can be just as bad.
- Don’t try to solve every problem in the meeting. If an issue comes up that needs more investigation, assign it to someone for follow-up and move on.
- Clearly identify action items, owners, and due dates.
- Have clear issue tracking—and address issues consistently. Over time, people will anticipate the flow and be more prepared.
- Be clear about what you want. Most ship rooms are focused and crisp. Some want to be more collaborative. Make sure everyone is on the same page. If you want it short and sweet, don’t let discussions go into design questions, and if it’s more informal, don’t try to cut people off.
- Focus on the facts rather than speculation. Words like "I think," "It might," "It could" are red flags. Status is like pregnancy—you either are or you aren’t; there’s no in between.
- Everyone’s voice is important. A phrase heard in many war rooms is "Don’t listen to the HiPPO"—where HiPPO is an acronym for highest-paid person’s opinion.
- Set up exit criteria in advance at the beginning of the milestone, and hold to them. Set the expectation that quality goals are to be adhered to.
- One person runs the meeting and keeps it moving in an orderly manner.
- It’s OK to have fun.
The next installment will deal with Chapter 4, and that will be posted on Tuesday.
Thursday, November 25, 2010
BOOK CLUB: How We Test Software At Microsoft (2/16)
This is the next installment in the "book club" approach to reviewing "How We Test Software at Microsoft". This installment covers chapter 2.
Chapter 2. Software Test Engineers at Microsoft
Microsoft didn’t have dedicated testers until several years into the life of the company, and even then, it took time for testing to develop as a distinct career path. Microsoft claims that a high school intern named Lloyd Frink was the first tester, hired in June 1979. The first full-time tester is said to have been hired in 1983, with a greater emphasis on hiring testers starting in 1985.
Usability was one of the first testing disciplines to come to the fore, specifically around aspects of Microsoft Word such as Mail Merge and other features that were more challenging than others and required more detailed testing to make sure that the system behaved as expected and was accessible to the users.
Microsoft's developer title is officially Software Development Engineer (SDE). The formal title for software testers at Microsoft is Software Development Engineer in Test (SDET). Microsoft considers testers to be developers, and the expectation that testers know how to code is emphasized. Testers design tests, influence product design, conduct root cause analysis, participate in code reviews, and write automation. Microsoft sees its biggest differentiator from other companies as being that it deliberately seeks coders to be testers, not necessarily because it is looking for automation experts to automate all tests, but because testers who understand computer architecture and software development will have stronger analysis skills than testers who are not as adept at coding.
Microsoft does not specifically hire subject matter experts in many areas, though they do hire some. Testers are instead trained to be SMEs for the product area they will be working on, in addition to learning how to adequately and effectively test the application. Microsoft likes to focus on what it considers the “Tester’s DNA”:
"Tester DNA has to include a natural ability to do systems level thinking, skills in problem decomposition, a passion for quality, and a love of finding out how something works and then how to break it," he put the marker down and looked at the room. "Now that is what makes up a tester that makes them different from a developer. The way we combine that DNA with engineering skills is by testing software. The name we choose should reflect this but also be attractive to the engineers we want to hire. Something that shows we use development skills to drive testing."
–Grant George
Now, I’m going to opine for a minute here (hey, it’s my review, I’m allowed :) )… in some ways, I think what Microsoft has done is brilliant, in that it allows a path for software testers to work toward and develop solid coding skills, and that is good if that’s what a tester wants to do. However, it has likewise been picked up by many other organizations with a “me, too” attitude that testers will be coders, with little understanding that not all testers, even technical and seasoned ones, necessarily want to be software developers, or for that matter need to be. Many of us don’t, and we look at coding as a skill we need to be aware of and have some knowledge about, but not necessarily the prime focus of our efforts. Nevertheless, that is the approach Microsoft takes, and that’s reasonable for their organization and their goals.
The authors state that one of the primary motivations in this direction was the lengthy licensing and support agreements they had for software and applications (3 years at the consumer level, 10 years at the corporate/enterprise level). There’s also no question that the SDET title is a strong recruitment tool; many people who develop software and may choose not to go into full time software development may consider the SDET track to be a good one to follow.
Microsoft looks for what it considers to be its “ten core competencies” when it hires engineers, regardless of whether they will be SDE or SDET:
• Analytical Problem Solving Can the individual in question solve difficult problems?
• Customer-Focused Innovation Does the individual care about the customers and look to see how to help solve their challenges?
• Technical Excellence Do testers understand the underlying operating system and networking code, and do they understand how to optimize code?
• Project Management Can testers demonstrate appropriate time management and prove they can deliver on their goals in a timely manner?
• Passion for Quality In short, you really have to care about a product that works and works well.
• Strategic Insight Can you help find that next big thing that shapes the future?
• Confidence You may get pusback from developers, can you stand up to it and push back as well when it’s warranted?
• Impact and Influence Are you an agent for change? Can you articulate how you are?
• Cross-Boundary Collaboration this translates to “how well do you play with others”?
• Interpersonal Awareness Can you be critical about your progreess? Do you see areas that need improvement, and once you identify them, will you act to make the improvement happen?
The SDET role at Microsoft is considered to be one of the most important we have, and carries an enormous amount of responsibility in shipping the products we do. When I first start talking to the candidates, I really want them to understand that testing is just as much a science as writing code and understanding the algorithms. I start talking to them about the wide range of things that testers must be able to do.
--Patrick Patterson, Microsoft Office Test Manager
Microsoft says that about half of their hiring comes from college graduates, the other half from within the software industry. Their ideal industry candidate is someone who works where the role of product developer and software tester are combined. Microsoft’s developer to test ratio is astounding, approaching 1 to 1. This is a far cry from other industries, where the ratio of developers to testers is much higher, often 5 developers to 1 tester or even 10 to 1 or greater (in the spirit of full disclosure, the company that I work with has a ratio of 8 to one, but then I am the only tester at my company… I told you this comparison would at times be mind boggling to me :) ).
So what does Microsoft recommend for the aspiring software tester that wants to become a SDET? After employees go through what Microsoft calls New Employee Orientation (NEO), each tester works with the Engineering Excellence (EE) group to receive additional technical training. “Testing at Microsoft for SDETs” is usually taken within 12 months of starting at Microsoft. This is a 24-hour course, and much of the topics are covered somewhat in “How we Test Software At Microsoft”. From there, a significant amount of training is a vailable for the SDET to advance and develop along their career path, with much of the training supplied in-house by Microsoft, in the way of software development classes, languages, architecture, and advanced testing topics.
Microsoft has basically two main roles, individual contributors (IC) or managers. Depending on the career path chosen there are various “career stages”. Some testers try out management, and then decide to come back to being IC’s. Often, senior engineers jump back and forth between being IC’s and managers.
Microsoft adds somewhere in the neighborhood of 500 testers every year (AOP). The 9,000 test engineers at Microsoft play a large role in helping to ensure that the products ship with high quality. The variety of opportunities and the ability to grow within the company is limited only by the individuals own abilities to rise and meet the challenges.
We will pick up again with Chapter 3 on Saturday. See you then!
Chapter 2. Software Test Engineers at Microsoft
Microsoft didn't have dedicated testers until several years into the life of the company, and even then it took time for testing to develop into a distinct career path. Microsoft claims that a high school intern named Lloyd Frink was the first tester, hired in June 1979. The first full-time tester is said to have been hired in 1983, with a greater emphasis on hiring testers starting in 1985.
Usability was one of the first testing disciplines to come to the fore, specifically around aspects of Microsoft Word such as Mail Merge, features that were more challenging than others and required more detailed testing to make sure that the system behaved as expected and was accessible to users.
Microsoft's developer title is officially Software Development Engineer (SDE). The formal title for software testers at Microsoft is Software Development Engineer in Test (SDET). Microsoft considers testers to be developers, and the expectation that testers know how to code is emphasized. Testers design tests, influence product design, conduct root cause analysis, participate in code reviews, and write automation. Microsoft sees its biggest differentiator from other companies as the fact that it deliberately seeks coders to be testers, not necessarily because it is looking for automation experts to automate all tests, but because testers who understand computer architecture and software development will have stronger analysis skills than testers who are less adept at coding.
Microsoft does not specifically hire subject matter experts in many areas, though they do hire some. Testers are instead trained to be the SME for the product area they will be working with, in addition to learning how to adequately and effectively test the application. Microsoft likes to focus on what is considered the “Tester’s DNA”:
"Tester DNA has to include a natural ability to do systems level thinking, skills in problem decomposition, a passion for quality, and a love of finding out how something works and then how to break it," he put the marker down and looked at the room. "Now that is what makes up a tester that makes them different from a developer. The way we combine that DNA with engineering skills is by testing software. The name we choose should reflect this but also be attractive to the engineers we want to hire. Something that shows we use development skills to drive testing."
–Grant George
Now, I’m going to opine for a minute here (hey, it’s my review, I’m allowed :) )… in some ways, I think what Microsoft has done is brilliant, in that it allows a path for software testers to work toward and develop solid coding skills, and that is good if that’s what a tester wants to do. However, this approach has likewise been picked up by many other organizations with a “me, too” attitude that testers will be coders, with little understanding that not all testers, even technical and seasoned testers, necessarily want to be software developers, or for that matter need to be. Many of us don’t, and we look at coding as a skill we need to be aware of and have some knowledge about, but not necessarily the prime focus of our efforts. Nevertheless, that is the approach that Microsoft takes, and that’s reasonable for their organization and their goals.
The authors state that one of the primary motivations in this direction was the lengthy licensing and support agreements they had for software and applications (3 years at the consumer level, 10 years at the corporate/enterprise level). There’s also no question that the SDET title is a strong recruitment tool; many people who can develop software but choose not to go into full-time software development may consider the SDET track a good one to follow.
Microsoft looks for what it considers to be its “ten core competencies” when it hires engineers, regardless of whether they will be SDE or SDET:
• Analytical Problem Solving Can the individual in question solve difficult problems?
• Customer-Focused Innovation Does the individual care about the customers and look to see how to help solve their challenges?
• Technical Excellence Do testers understand the underlying operating system and networking code, and do they understand how to optimize code?
• Project Management Can testers demonstrate appropriate time management and prove they can deliver on their goals in a timely manner?
• Passion for Quality In short, you really have to care about a product that works and works well.
• Strategic Insight Can you help find that next big thing that shapes the future?
• Confidence You may get pushback from developers; can you stand up to it and push back as well when it’s warranted?
• Impact and Influence Are you an agent for change? Can you articulate how you are?
• Cross-Boundary Collaboration This translates to “how well do you play with others?”
• Interpersonal Awareness Can you be critical about your own progress? Do you see areas that need improvement, and once you identify them, will you act to make the improvement happen?
The SDET role at Microsoft is considered to be one of the most important we have, and carries an enormous amount of responsibility in shipping the products we do. When I first start talking to the candidates, I really want them to understand that testing is just as much a science as writing code and understanding the algorithms. I start talking to them about the wide range of things that testers must be able to do.
--Patrick Patterson, Microsoft Office Test Manager
Microsoft says that about half of their hiring comes from college graduates, the other half from within the software industry. Their ideal industry candidate is someone who has worked where the roles of product developer and software tester are combined. Microsoft’s developer-to-tester ratio is astounding, approaching 1 to 1. This is a far cry from other companies, where the ratio of developers to testers is much higher, often 5 developers to 1 tester or even 10 to 1 or greater (in the spirit of full disclosure, the company that I work with has a ratio of 8 to 1, but then I am the only tester at my company… I told you this comparison would at times be mind boggling to me :) ).
So what does Microsoft recommend for the aspiring software tester who wants to become an SDET? After employees go through what Microsoft calls New Employee Orientation (NEO), each tester works with the Engineering Excellence (EE) group to receive additional technical training. “Testing at Microsoft for SDETs” is usually taken within 12 months of starting at Microsoft. This is a 24-hour course, and many of its topics are covered in “How We Test Software at Microsoft”. From there, a significant amount of training is available for the SDET to advance and develop along their career path, with much of the training supplied in-house by Microsoft in the form of software development classes, languages, architecture, and advanced testing topics.
Microsoft has basically two main roles: individual contributor (IC) or manager. Depending on the career path chosen, there are various “career stages”. Some testers try out management and then decide to come back to being ICs. Often, senior engineers jump back and forth between being ICs and managers.
Microsoft adds somewhere in the neighborhood of 500 testers every year (AOP). The 9,000 test engineers at Microsoft play a large role in helping to ensure that the products ship with high quality. The variety of opportunities and the ability to grow within the company is limited only by the individual’s own ability to rise and meet the challenges.
We will pick up again with Chapter 3 on Saturday. See you then!
Wednesday, November 24, 2010
TWiST-Plus with Liz Marley and Denise Holmes
So this week we decided to take a break from doing the regular TWiST podcast due to people traveling and generally not being around, but I still have a bunch of audio from the Pacific Northwest Software Quality Conference that I’m waiting to get out there. With that in mind, I suggested we could just do a TWiST-Plus edition and combine a couple of the poster paper presentations. And that’s exactly what we did :).
So for today’s episode (note, this is earlier than normal because I’m not going to be around on Friday), I put together two pieces from poster paper presentations that were held at PNSQC. Poster paper presentations are presentations made on, you guessed it, presentation board displays. The presenters stand in front of the board and talk to people as they come by about their presentation. It’s a more low-key way to make a presentation than presenting an entire paper, but it still gives the opportunity to talk about the technology or idea with as many interested people as possible during the “social hours” of the conference.
The first presentation is from Liz Marley of The Omni Group. Her poster paper covered “Touchy Testing” and was dedicated to the questions “How do we effectively test applications for devices like the iPhone, iPad and Android?” and “What do we need to do differently?” The second poster paper presentation was with Denise Holmes of Edge Leadership Consulting, and covered the idea of polarities when dealing with business decisions. This was my first exposure to the idea, and I thought it was very interesting: the notion that weighing the pros and cons of decisions is not really sitting on one side of an issue or the other, but is actually an infinity-shaped feedback loop that moves through four quadrants.
Anyway, click the following link to listen to A Special Thanksgiving TWiST-Plus.
Standard disclaimer:
Each TWiST podcast is free for 30 days, but you have to be a basic member to access it. After 30 days, you have to have a Pro Membership to access it, so either head on over quickly (depending on when you see this) or consider upgrading to a Pro membership so that you can get to the podcasts and the entire library whenever you want to :). In addition, Pro membership allows you to access and download the entire archive of Software Test and Quality Assurance Magazine, and its issues under its former name, Software Test and Performance.
TWiST-Plus is all extra material, and as such is not hosted behind STP’s site model. There is no limitation to accessing TWiST-Plus material, just click the link to download and listen.
Again, my thanks to STP for hosting the podcasts and storing the archive. We hope you enjoy listening to them as much as we enjoy making them :).
Tuesday, November 23, 2010
BOOK CLUB: How We Test Software at Microsoft (1/16)
Since I started this blog, one of the positives I’ve heard back from readers about is the fact that I would do a weekly book review. I would go into some depth on why I think it’s worthwhile, usually with a chapter by chapter breakdown.
This approach to book reviews is not mine. I actually first saw it done this way by Trent Hamm over at The Simple Dollar, and liked the format so much I adopted the approach for TESTHEAD, right down to trying to do a review each week.
Some of you may notice that I’ve fallen off the wagon with the weekly book reviews. There are a few reasons. The first is that I’ve already gone through most of the books that I have that are related to testing, and the second is that there are a number of books that, while I’d like to review and cover, deserve more than a week to read, digest and comment on.
This brings me to Trent and the Simple Dollar again. Occasionally, he does a multi-post book review in more of a “reader’s club” format, where his review is not of the entire book, but of each chapter, spread out over several weeks. I’ve decided to do exactly this, to give others a chance to consider the chapters, but more to the point explain what I felt was of value regarding the chapters and what my takeaways were, in a way that a quick five sentence summary paragraph just can’t do.
Alan Page made the decision of which book to feature first in this format an easy one when he gave me a copy of “How We Test Software at Microsoft” (Microsoft Press, 12/2008, 978-0-7356-2425-2), co-written with Ken Johnston and B.J. Rollison. Since I promised Alan I’d give him a good and thorough review, I felt there was no better book to start the “TESTHEAD BOOK CLUB” than this one :).
Introduction:
Alan tells us why he decided to write this book and why he felt that another book about software testing was valuable. In addition, he describes the training that Microsoft gives its testers and makes a case that it’s the format of the training, with its stories, anecdotes and odd pieces of trivia, that makes it memorable and ultimately effective for its participants. In short, while the information is helpful, it’s the stories embedded with the techie stuff that make it worthwhile.
In How We Test Software at Microsoft, Alan has separated the book into four themes. The first deals with Microsoft in general, and what makes it an interesting ecosystem with regard to its practices with people and engineering. The second is dedicated to the actual methods of testing employed by Microsoft. The third covers the tools that are used to help make that testing a reality. The final section deals with what Alan and the other authors see as the future of testing at Microsoft.
Alan makes the point that this book is not just his, and that Ken and B.J. have a significant impact on the content of the book. Each of their styles comes through, and they all make their mark. Alan describes B.J. as the “professor” of the bunch. Ken is the “historian and storyteller”. Alan is the Joe Friday of the group with the “just the facts, ma’am” attitude.
Alan also makes it clear from the outset that this book cannot possibly take every aspect of testing at a company the size of Microsoft and do it justice, nor can he claim that everything in the book is to the letter what testers at Microsoft do all the time, but it represents a good cross section of the company and the way they approach software testing, the methodologies they use, how they are applied, and what they feel works for them.
In addition, there is a web site specific to the book and commentary related to it at http://www.hwtsam.com/, and there are, at the time of my writing this, 45 entries, so feel free to check the site out if you’d like to read more about the book from the source.
Also, to keep everyone on the same page, there will be times when I’m going on what the book says, which may not be the latest and greatest reality (it’s been two years since the book was published, so note that when my review uses the parenthetical “(AOP)”, it means “As Of Publishing”).
Chapter 1: Software Engineering at Microsoft
This is a brief overview of the way that software engineering is conducted at Microsoft, and some of the key aspects that help to make Microsoft culture what it is. Microsoft has gone through changes in its approach and goals, starting with the vision of “a PC on every desk and in every home”, and then moving on to “Empowering people through great software – any time, any place, and on any device”. In 2002, the vision shifted to "To enable people and businesses throughout the world to realize their full potential." At the time of the writing of the book, Microsoft’s vision statement was "Create experiences that combine the magic of software with the power of Internet services across a world of devices." This goes well beyond the original idea of a PC on every desk, and it shows Microsoft’s willingness to adapt to a changing marketplace (and, for that matter, the necessity to do so).
I’ve only worked for one really large company in my career (my definition of really large being a company with more than 10,000 people in it), and that would be Cisco Systems, though some could argue that Konami fits the bill as well with over 5,000. YMMV. The idea of working for a company as immense as Microsoft honestly boggles my mind. There are three main product divisions at Microsoft (AOP):
- PSD Group (Platforms and Solutions)
- MDB Group (Software for Businesses)
- E & D Group (Entertainment)
Each of these divisions is reported to earn $10 billion to $20 billion in revenue, making each of them larger than many Fortune 500 companies!
Alan makes the first of the book’s diversions to tell a story about T-shirts and the joke that “when a T-shirt has been made, it’s a sign that reorganization is in the works”. Having had the big-company experience with Cisco Systems in the 1990s, I can very much relate to this story. It also put a smile on my face and proved Alan’s point in the introduction… five years from now, I’m probably not going to remember the details of Microsoft’s organizational structure, but I’ll smile when I think of the T-shirt story, because it reminds me of my own experiences.
Think about the sheer numbers at Microsoft… they employ more than 90,000 people worldwide (AOP), and the point made about Microsoft’s size is its product breadth. More to the point, this breadth brings unique engineering challenges that need to be met. Microsoft has applications, games, peripherals and devices in just about every conceivable software market, ranging from Datacenter Server Clustering software to Halo 3. Key to this is the understanding that there is no one way to build and ship products. Relevant to the tester, that also means there is no one way to test their products, either. There is a set of guiding principles and ideas that they use, some broad enough to cover all product lines, some specific enough that they are only relevant to their local application.
Irada Sadykhova shares an anecdote in which shipping a product at Microsoft is likened to producing a play. There are directors, producers, and actors (the example likens the engineers to the actors), and while a performance of a play is about the art of the play, the end goal is to sell tickets to the performances. Likewise, Microsoft orchestrates large-scale product launches with one key goal in mind: to find an audience willing to use, become loyal to, and pay for the experience.
“The key to the whole analogy is balancing the contradictions of a large software company that has to produce big profits with high margins and be dynamic and creative at the very same time. There is some inherent conflict between large production scale and creativity; balancing them successfully is a core of the success of Microsoft. No Google, or Apple, or Sun has quite yet mastered this challenge at scale. Only the likes of Cirque du Soleil and Microsoft have proven they can do it.”
—Irada Sadykhova (Director, Engineering Learning & Organization Effectiveness – Engineering Excellence)
Microsoft also champions the idea of looking for that next new market or idea, realizing that not all of them are going to pan out. Many times, product enhancements that we see in the final version started out as incubation projects, often in very small groups as just an idea, and were allowed to develop and grow in influence as the application or approach demonstrated that it had “legs” to stand on. The Bill Gates Think Week is another example, where white papers are presented and Bill Gates reads and comments on them twice each year. Often these papers become incubation projects as well.
Microsoft employs 10 different types of engineering disciplines. They are as follows:
- Software Development Engineers in Test (SDETs) are responsible for maintaining the testing and QA standards.
- Software Development Engineers (SDEs) are the coders, i.e., the traditional software developers who write the code that ultimately becomes Microsoft products.
- Program Management (PM) combines project management, product planning, and design. The PM defines a new product’s technical details and helps ensure it gets made.
- Operations (Ops) manages and maintains Microsoft online services and internal IT details.
- Usability and Design (UX) focuses on the visual end-user experience, conducting research to see how the user interacts with the product and then proposing improvements based on the feedback.
- Content creates the UI text, articles, training guides, magazine and web articles, books, Help files, etc.
- Creative describes positions most often associated with the Games group, working on new ideas for games and game play for PC and Xbox titles.
- Research focuses on publishing papers, and helping to allow new technologies to incubate and form.
- Localization International Project Engineering (IPE) focuses on translating from English to multiple languages (and from multiple languages to English, too), as well as adapting software to local market requirements and needs.
- Engineering Management runs the teams that employ the other engineering disciplines.
Microsoft has development centers spread out all over the world, with the bulk of their engineering taking place in the U.S. (about 73% in and around Redmond, Washington, USA), but also including development offices in places like India, Ireland, the United Kingdom, Denmark, Japan, China and Israel. They predict that a larger part of that development work will shift to other geographical areas, and that the shift will increase in the coming years.
Coming up on Thursday, I will cover Chapter #2, which is about Software Test Engineers at Microsoft. Stay tuned until then :).
Monday, November 22, 2010
Lessons Learned From Weekend Testing
As we have now had two voyages of the Weekend Testers Americas cruise ship, I've had a chance to sit back and reflect a little bit. More to the point, I've had a chance to sit back and reflect on other people's reflecting :). For those who have shared their ideas, their insights and their opinions, thank you. We welcome your ideas and encourage you to keep sharing, as it helps us get better at doing this.
Here are a few details I've discovered in the two weeks that I've facilitated Weekend Testing Americas.
1. Testers Are Amazing
No, really, I'm not blowing smoke or currying favor with this comment; the testers who participate in these events really are some of the most giving, intelligent, helpful, witty and focused people I've encountered online. Now, it's entirely possible that the very nature of these events, the fact that they are on weekends, that they are worldwide, and that they are peer promoted, encourages people who already fit this profile to show up. Lazy people do not tend to give up their Saturday or Sunday to practice testing. Even with that, the caliber of the people who show up to these events makes me smile, and encourages me that there are some very bright and very giving people out there happy and willing to share their knowledge.
2. Not All Download Speeds are Equal
I live in the San Francisco Bay Area, one of the most technologically advanced areas on the planet, and it has a pretty darned good Internet infrastructure. I was able to download the applications I tested the past two weeks within 30 seconds, and install them within 5 minutes. I mistakenly assumed that my experience would be similar to most people's, and that, given enough lead time, others would be able to download the apps in advance as well. I neglected to realize that those who were attending from other countries do not necessarily have the same infrastructure in place that I do, and subsequently, even with the advance notice, were not able to get the applications in time. Needless to say, I have developed a greater appreciation for testing web apps and smaller applications.
3. Being Wide Open With Missions/Charters Invites, Well, Chaos
My goal was to have more open charters. Why restrict people to a very narrow charter? They're professionals, give them something broad to explore. Well, after two weeks of charters that were fairly broad, I've come to appreciate the phrase "herding cats" a lot more (LOL!). It's not the testers' fault, it's mine. I ask for broad charter ideas and then wonder why I can't focus the conversation. Why should they focus? I gave them permission to roam all over the place, so of course their experiences are going to be all over the map, too! Narrower charters are not a constraint... well, yes, they are, but I'm seeing them as a somewhat necessary one. They help keep the testers on track and the presenter on message, and that's not a bad thing at all.
4. Weekend Testers is Less about Teaching and More About Self Discovery
Having come from teaching the Association for Software Testing's BBST classes, I somewhat expected that I would be able to use the same approach for Weekend Testers. Well, some things, yes. The ideas and concepts, yes, but the delivery method is entirely different. In the AST classes, people have days to absorb and consider input, and it's done in an asynchronous messaging format. In WT, the time box is much shorter: one hour to test, one hour to discuss the testing and objective, and all of the conversation happens in very close to real time via chat. So the same techniques that work in one will not necessarily work in the other.
5. It's Really Important to Keep the Conversation Moving
This hasn't actually been an issue per se the past couple of sessions, but I think that has to do with the sheer number of participants. The first time we had 21 people participating, so there was barely a quiet moment. The second session had 13 participants, and likewise, not many quiet moments there, either. However, that still doesn't mean that everyone is participating or getting a chance to share ideas and what they have learned. Therefore it's important to give people a chance to answer questions directly. Often the quiet person in the room may have an insight that's really great, and others will take off with that comment and expand on it, in ways the original commenter may not have considered (or the moderator, for that matter ;) ).
To all of the testers who have been participating in these Weekend Testing activities, really, we appreciate your feedback immensely, and we are learning from each and every one of you, so please keep participating and please tell other testers about these events when we have them. We will be taking a break this week since it's the Thanksgiving holiday in the U.S., and a lot of people will be traveling (me included), but we have plans to get back together again on Saturday, December 4th. Once that is confirmed, we will let everyone know and we will give everyone a chance to get whatever they might need to participate. We have enjoyed these first few voyages, and we hope there will be many more to come :).
Friday, November 19, 2010
TWIST # 21 with Selena Delesie And Lynn McKee and TWiST-Plus with Alan Page
So what happens when two friends from AST and Twitter announce they are coming in from Arizona to be in San Francisco for two days for the AST board meeting, after having both flown in from the AYE conference in Phoenix, and would like to get together with some friends in the area for dinner? If you’re normal, you recommend a restaurant, go have some food, talk, laugh and call it a night. Me? I’m not normal, so of course I asked if I could bring my DAT deck and a microphone and record an interview with them. The result is this week’s TWiST episode.
For today’s main TWiST, I get the chance to be in the driver’s seat as I interview two friends from the Great White North: Lynn McKee from Calgary and Selena Delesie from Waterloo. Selena has been on the show before, but this is Lynn’s first time being interviewed, so we had some fun with it. The restaurant we were in was a bit too loud and chaotic, so we found a spot in Embarcadero Center that had tables on a walkway overpass above Sacramento and Battery Streets. From there, the sounds of San Francisco at night made the backdrop of our interview (and occasionally interrupted it outright :) ). Anyway, click the following link to listen to Episode #21.
For this week’s TWiST-Plus, I’ve continued with audio from the Pacific Northwest Software Quality Conference. This week’s post is a quick interview I did with Alan Page, where we discuss performing code reviews from the perspective of a tester. It’s just a couple of minutes, but there are some great insights in Alan’s answers. So for those interested, here’s the audio for “TWiST-Plus with Alan Page”.
Standard disclaimer:
Each TWiST podcast is free for 30 days, but you have to be a basic member to access it. After 30 days, you have to have a Pro Membership to access it, so either head on over quickly (depending on when you see this) or consider upgrading to a Pro membership so that you can get to the podcasts and the entire library whenever you want to :). In addition, Pro membership allows you to access and download the entire archive of Software Test and Quality Assurance Magazine, and its issues under its former name, Software Test and Performance.
TWiST-Plus is all extra material, and as such is not hosted behind STP’s site model. There is no limitation to accessing TWiST-Plus material, just click the link to download and listen.
Again, my thanks to STP for hosting the podcasts and storing the archive. We hope you enjoy listening to them as much as we enjoy making them :).
Sunday, November 14, 2010
Reflection: Weekend Testers - Americas #1
Take 21 testers, one group Skype session, one open source application in beta state, and me trying to moderate the whole thing and what do you get? You get the first Weekend Testers session to take place in the Americas!
The session report and chat transcript can be seen here.
Setting up for this session was interesting. We had most of the technical details taken care of early on with regard to the application (accounts in the bug repository, the method for setting up the group chat, etc.), but we were still wondering what would make a good first testing challenge. This went all the way until Thursday, when I came across an application called StepMania. StepMania is a "clone" of the Dance Dance Revolution games, and runs on Windows, Mac and Linux. With this, I thought, "OK, this could be a fun and interesting application". I had a feeling that many people would not be big fans of it, but it would be fun to see the interaction.
We had hoped we would have a good turnout, but I didn't expect to see 21 people attend! Nor did I expect to see so many people from all over the world attending. When I saw that we had four people initially sign in from India, wow... these were people willing to come in at midnight to participate in these testing sessions!
The goal of the session was two-fold. First, we wanted to encourage the group members to collaborate with each other, so we made that the primary mission of the testing session. This was a key takeaway that I learned from AST's BBST Foundations, and I wanted to see how it played out in real life. Sure enough, a number of the testers during the session became very focused on the bugs they found, but lost sight of the primary focus, which was to work with and learn how to test with their partner. The name of the session, Let's Dance, was meant to have a double meaning. First, the game itself is a dance-related game, but the idea of having two people learn how to work together and share ideas is a "dance" in and of itself.
The general feedback was that people enjoyed the experience, even if they found some of the interaction to be frustrating. It was definitely a challenge to manage so many people at one time, and I frequently felt like the conversation was getting away from me. My thanks to Joe Harter for giving us some cover and helping direct questions and provide some "crowd control".
Our next session is scheduled for this coming Saturday, November 20th, at the same time (2:00 PM EST, 11:00 AM PST). We are still debating what challenge to do this time, and we encourage suggestions. What would you like to see us take on?
Friday, November 12, 2010
TWIST # 20 with Brett Schuchert and TWiST-Plus with Jonathan Bach
Yeah! My first “interview piece” has been posted as a TWiST-Plus.
As I mentioned a few weeks ago, we have been building up content that we want to share, but we are trying to keep the regular TWiST podcasts to a particular length and to just once a week. The cool thing is that we have started producing extra audio features that we are referring to as “TWiST-Plus”, and these can be about any number of things, often related to testing, but they might run far afield or cover totally different things. Also, we are not limited to a specific time limit (they could be short sound bites or potentially full talks). Look for more of these in the coming weeks.
For today’s main TWiST episode, Matt talks with Brett Schuchert of ObjectMentor. Brett is a developer and consultant, and he shares some great comments about organizations becoming “test infected” and talks about how testers and developers can collaborate better and improve both disciplines. For those who want to check it out, here is Episode #20.
For this week’s TWiST-Plus, I’ve started to post the audio from the Pacific Northwest Software Quality Conference. My first post (and solo interview) is with Jonathan Bach, where we discuss taking a complex idea and bringing it down to simpler terms. So for those interested, here’s the audio for "Twist-Plus with Jonathan Bach".
Standard disclaimer:
Each TWiST podcast is free for 30 days, but you have to be a basic member to access it. After 30 days, you have to have a Pro Membership to access it, so either head on over quickly (depending on when you see this) or consider upgrading to a Pro membership so that you can get to the podcasts and the entire library whenever you want to :). In addition, Pro membership allows you to access and download the entire archive of Software Test and Quality Assurance Magazine, and its issues under its former name, Software Test and Performance.
TWiST-Plus is all extra material, and as such is not hosted behind STP’s site model. There is no limitation to accessing TWiST-Plus material, just click the link to download and listen.
Again, my thanks to STP for hosting the podcasts and storing the archive. We hope you enjoy listening to them as much as we enjoy making them :).
Wednesday, November 10, 2010
Setting Expectations and Driving Them
For those who use Blogger as their delivery platform, a few months ago they added a Stats tab so that people could see how many visitors and how many articles have been read and when. Because of this, I decided to add a new widget to my blog showing the Top Ten posts based on readership. As an experiment, I've been tracking the top five and seeing their movement.
Not surprisingly, some of my more recent entries have been performing well, but the top spot has been held solidly for the past few weeks by an article I wrote about pairing with a domain expert. I thought, wow, that must have been a pretty good entry (it has the benefit of being somewhat brief (LOL!) ), but then, as I was looking at the distribution of the stats, another thought came to me... is this post in the number one spot because it's the best article on my site, or is the fact that it's being shown as the number one article driving its continued stay at the top?
I've only been keeping track of this stuff for a short time; I don't have a large volume of data to work with, so this may all be totally moot in a week or two. For now, it seems to be holding true. Also, it's entirely possible that there may be other aspects driving it to the top position, such as re-tweets, other blog posts mentioning it, etc. The key point, though, is that the traffic for the article increased after I put up the widget. It reminds me of when I was a kid and I'd get to see a copy of Rolling Stone and see what albums were in the Top 50 spots. Was I influenced to buy the number one album because it was the number one album? Sometimes, yes. So in this case, an external indicator drew my attention and caused me to think "hey, maybe I should check this out."
So what does this have to do with testing? My point is that sometimes all it takes is a little change to an ordering, or a new way to display something, and people will respond in a different way and use the application differently based on that one change. Our plans and our expectations need to change accordingly; just because we think we know the layout of our application and how it's used, some simple under-the-covers changes can alter the playing field and bring to the fore things we might not consider. Likewise, just changing the order of a display can bring new activity to areas that may not have been considered or had little if any attention paid to them.
Tuesday, November 9, 2010
Weekend Testers Americas (WTA) Preparing to GO LIVE This Saturday
So here's where we stand...
- We have a group dedicated to making the event happen.
- We have some ideas for topics to start out (we want to begin with some test technique ideas and put them to real world exercise).
- We have identified some applications that will be tested (which one we test will remain a secret until the session starts... don't want to give unfair advantage to anyone, and besides, that's the general M.O. of the Weekend Testers operations, let everyone start on an even playing field and discuss things going in with the same level of understanding, or lack thereof :) ).
- We have a time set for it to happen. We will begin at 2:00 PM EST (11:00 AM PST).
- We will use Skype to conduct the sessions (Skype ID is "weekendtestersamericas").
- We will use Twitter and the @WTAmericas name and #WTAmericas hashtag to announce start times and share announcements about the testing events.
We have all this, but what don't we have yet?
- We don't have confirmation from the Home Office as to whether they will be able to participate with us this go-around (unless someone wants to work the night shift; 2:00 PM EST is 12:30 AM in Bangalore). I'll update this once we know for sure whether they can help us facilitate this session.
But most importantly...
- WE DON'T HAVE YOU!!!
In short, for this to work, we need participants! We need testers to come and play with us, confer with us, struggle with us, laugh with us, and learn with us. Make no mistake, none of us are gurus. Some of us have a lot of experience, and we've been around the block a few times, but that doesn't mean we are the wise ones and all shall sit at our feet and partake of our wisdom. That's not the point of this. The point is that we will all learn from each other, and we will all share insights and instincts that will help each other grow and develop into better testers. Beginning testers, you may be the one to offer more insights than those of us who've been doing this for years (think I'm kidding? Check out some of the experience reports on the Weekend Testing site, you'll see what I mean).
Here's everything you'll need to participate:
Weekend Testing – Americas Chapter Session No. 01
Date: Saturday, November 13, 2010
Time: 2:00 PM – 4:00 PM EST, 11:00 AM – 1:00 PM PST
(For time conversion follow this link:
http://timeanddate.com/worldclock/fixedtime.html?day=13&month=11&year=2010&hour=19&min=00&sec=0&p1=0.)
Add “weekendtestersamericas” to your Skype contacts if you haven’t already. Please send an email to WTAmericas@gmail.com with the subject WTA01.
Regards,
Joe / Michael / Lynn
Weekend Testing – Americas Chapter
For more details, contact WTAmericas@gmail.com
So come out and join us this weekend for our fledgling mission. We are excited, and we are looking forward to getting this "Americas Adventure" off the ground.
Monday, November 8, 2010
My First "Guest" Blog Post (For SmartBear) :)
For some people, this may not be a big deal, but I think it's pretty cool :).
I wrote a thing for SmartBear, and they posted it. It's just a simple user story about how I used TestComplete to drive an API tool.
The main point I made is that sometimes automation scares us because of the enormity of the task. When that's the case, step back and go small. Really small, in the case I described. Stringing together many small "cuts" (small, autonomous scripts) will often do more than trying to use one big weapon to slay the beast (i.e., a large framework of tests).
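To make that idea a little more concrete, here's a minimal sketch of what I mean by small "cuts". This is not the actual TestComplete work from the SmartBear post; it's a hypothetical Python example, and the endpoint, record numbers, and function names are all made up for illustration. The point is simply that each check stands on its own, and the "runner" is nothing more than a loop.

# Hypothetical sketch, not the scripts from the SmartBear post: three tiny,
# self-contained "cuts" against a made-up local API, strung together by a simple runner.
import urllib.request
import urllib.error

BASE_URL = "http://localhost:8080/api"  # assumed test endpoint, illustrative only

def check_service_answers_ping():
    # Smallest possible check: the service responds at all.
    with urllib.request.urlopen(BASE_URL + "/ping") as resp:
        assert resp.status == 200, "service did not answer the ping"

def check_known_record_is_returned():
    # A known record comes back with the expected id.
    with urllib.request.urlopen(BASE_URL + "/records/1") as resp:
        assert b'"id": 1' in resp.read(), "expected record 1 in the response"

def check_missing_record_is_rejected():
    # A record that should not exist is rejected with a 404.
    try:
        urllib.request.urlopen(BASE_URL + "/records/999999")
        raise AssertionError("expected an error for a missing record")
    except urllib.error.HTTPError as err:
        assert err.code == 404, "expected a 404 for a missing record"

if __name__ == "__main__":
    # The "runner" just strings the cuts together and reports on each one.
    for cut in (check_service_answers_ping,
                check_known_record_is_returned,
                check_missing_record_is_rejected):
        try:
            cut()
            print("PASS:", cut.__name__)
        except Exception as err:
            print("FAIL:", cut.__name__, "-", err)

Each little check can be written, run, fixed, or thrown away on its own, which is exactly the appeal over building (or buying into) a large framework before you've landed a single cut.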
Anyway, my entry can be found here.
Standard Disclaimer: I do actually use TestComplete, and I think it's a pretty cool commercial tool. I received no payment or perks for posting my blog entry there, just so y'all know :).
Saturday, November 6, 2010
"Jamming" for Testing Jobs
I had an interesting conversation a few days ago, in which I was asked how to determine if a tester really has what it takes. It's a tough question, because it's subjective. Does the tester have what it takes to do what?
To test?
To lead?
To work within a particular organization?
To follow a particular testplan?
To write scripts a particular way?
To test with an exploratory mindset?
To understand the value of context-driven testing?
All of these are important and, as I said previously, subjective; they carry different weight with different people. I answered that I didn't believe a resume or even a sit-down interview would realistically help answer these questions. Perhaps a testing challenge and writing down observations might, but even then, I think the information would be somewhat limited. What have we really learned about the tester, and how would they really perform?
This led us onto a tangent about how we describe testers, and I used the analogy of the "Rock Star" in music. There was a forum I participated in a number of years ago (dedicated to talking about music), and those who participated frequently got a ranking. The rankings were based on the old gags about where in the "band" process you were:
Practicing in the bedroom.
Jamming in the garage.
Playing at a backyard barbecue.
Playing a jam night in a bar.
Performing with a band in a nightclub.
Rehearsing in studio.
Headlining clubs.
Touring in a van.
Signed and recording first album.
Playing small sheds.
Touring in bus.
Headlining small sheds.
Playing arenas.
Headlining arenas.
Playing in stadiums.
Touring in jet.
Headlining stadiums.
You get the point. We jokingly put various testers we were familiar with on this "rock star" continuum. The true "rock star" testers? Hmmm... I'll pick on James Bach and Cem Kaner because I think they've both reached this stage... they are the rock stars that tour in jets and headline stadiums. They've also worked for decades to get to that position. Other testers fall on different levels of the continuum. As we joked about where I saw myself, I said I considered myself a touring club headliner as a tester. Not yet a rock star, but quite a ways from first practicing in the bedroom or the garage, for sure.
After I had a chance to think about it, I realized something. When I joined a band, I didn't interview in the classic sense. I tried out. I went for auditions, and sat with bands in their studios (or garages, or sometimes even bedrooms) and jammed with them. They played, I sang. After a couple of hours of hanging out, sharing song ideas, jamming covers, working on harmonies, we knew if we would have good chemistry or not. Wouldn't that be a great way to go about hiring testers, or developers, or, heck, anyone?!
The interview is artificial; I can tell you lots of things about me, but going back to the band scenario, when I try out, it's all or nothing: I either sing in a way that works for the band, or I don't. I had several experiences where I auditioned and even worked with bands for a period of time, only to have them decide I wasn't right for them (it could have been my tone, it could have been that I was too tall, it could have been that they had a biker image and I seemed a little more glam by comparison). Put into the perspective of testing teams, if you have a tester who has been strictly ISO 9001 focused, it's likely they'll have a challenging time mixing with an XP team. It's not a foregone conclusion they won't perform well, but it's important to understand the differences and see how they will interact. It would be like me, a hard rock vocalist, trying out to be a back-up singer for Kate Bush. It would be ridiculous if I weren't willing to modify my approach. Even in that light, would I be more comfortable providing cool, ethereal backing vocals for Kate's songs, or would I be more comfortable rocking out?
So here's a thought for those looking to hire testers... consider having a "bug night". Provide pizza, foster a collegial atmosphere, and go after a challenge. Invite some of your candidates to participate. Jam a while with them. Perhaps work on something unrelated to your product (look at some Weekend Testing examples if you'd like). This way, you can determine whether the tester who looks great on paper actually performs the way you hope they will. Testers, consider these opportunities to "audition". You get the chance to see how other teams work in their natural habitat, and if you're not a good fit, well, that's OK; you learn that, and you may learn a lot more from the experience as well.
In the end, we could all learn a new riff or two, eh :)?