As many of you know I don't do direct advertising on TESTHEAD. At times, though, I do give plugs for things my friends are doing or things I think would be noteworthy and interesting. This would be one of those plugs :).
The folks at QASymphony are having Matt Heusser present a webinar on Test Estimation Hacks on April 21, 2016, from 2:00 p.m. to 3:00 p.m. Eastern time.
First off, this is not a sales presentation. Matt is doing this based on concepts and practical approaches that are tool agnostic. You’ll leave with a half-dozen ways to think about estimating, from “can you finish it by lunch?” to full-blown new product development and planning. The techniques themselves can be applied by anyone, and Matt will explore why coming up with time estimates in testing can be so counterintuitive — and what to do about it. It's not an endorsement of QASymphony or a plug for them, either; they are simply hosting the webinar.
Matt is a thorough presenter, he's engaging, and he does his homework. Anyone who has listened to the "TWiST" podcast or "The Testing Show" knows what I'm talking about.
In short, if you can attend, do so. If you can tell others about it, do that, too :).
Friday, April 8, 2016
Thursday, April 7, 2016
Automation Challenges: Live from #STPCON
One of the things Paul Grizzaffi said as a lead-in to his Automation Challenges Roundtable was "this will *NOT* be covering how to use Selenium!" With that, I was sold on where to spend my last session.
What's nice about this session is that it's not a presentation per se; it's a group hug, in the best and nicest way I can say that. Since this is an open-mic flow, we'll take it one question at a time:
How Do I Communicate the Value of Our Automation?
I'm fortunate in that I work for a company that has a robust and long-serving automation platform. It's not very modern, and it's not always pretty, but it works well and has for a long time. Most of the tests pass most of the time, and truth be told, there have been a few times where I wondered if our tests were really valuable... until we actually caught regressions that saved our butts. To that end, having the automation in place was tremendously valuable, so I'm not sure what I would suggest beyond making sure that you report your results regularly, highlight where the tests helped with finding issues, and encourage continued work on automating new features as they come.
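To make that reporting idea a little more concrete, here's a minimal sketch of the kind of summary I have in mind. It's purely illustrative; the results.json file, its fields, and the "caught_regression" flag are hypothetical stand-ins for whatever your own automation run actually produces.

```python
# Hypothetical sketch: summarize a nightly automation run so its value is visible.
# Assumes a results.json file containing a list of objects like:
#   {"name": "...", "status": "pass" | "fail", "caught_regression": true/false}
import json
from collections import Counter

def summarize(path="results.json"):
    with open(path) as f:
        results = json.load(f)
    statuses = Counter(r["status"] for r in results)
    regressions = [r["name"] for r in results if r.get("caught_regression")]
    print(f"Total tests: {len(results)}")
    print(f"Passed: {statuses['pass']}, Failed: {statuses['fail']}")
    if regressions:
        print("Regressions caught this run:")
        for name in regressions:
            print(f"  - {name}")

if __name__ == "__main__":
    summarize()
```

Even something this small, sent out regularly, keeps the "why are we paying for this?" conversation grounded in what the suite actually caught.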
Different Personalities in the Automation Pipeline
It helps to have people with different experiences and viewpoints, but that opens up some additional challenges. People develop in unique ways, and different people work in ways that may or may not be optimal for others. Short of isolating everyone and having them work alone (i.e. what's the point?), encourage each member to communicate how they like to work. We may not be able to resolve everything, but helping each other understand our best approaches will certainly make it smoother.
Requirements, BDD and Code in One
The desire to have business requirements, BDD language, and code in one place is doable, but it's tricky. Test Rail is one approach that can front-end the business language, yet still minimize the duplication. The bigger question is "how can we get the group to work together with a minimum of duplication?" The unfortunate truth is that the more specific we make something, the less flexible it becomes. Seems like a good potential product :).
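As a rough illustration of the "business language next to code" idea, here's a minimal pytest-bdd style sketch, assuming a Python shop. The feature file, step names, and login example are all hypothetical, and other stacks have their own equivalents (Cucumber, SpecFlow, and so on).

```python
# Hypothetical sketch: the business-readable Gherkin lives in a .feature file,
# and these step definitions bind that language to executable code.
#
# features/login.feature (plain text, readable by the business side):
#   Feature: Login
#     Scenario: Successful login
#       Given a registered user
#       When they log in with valid credentials
#       Then they see their dashboard
from pytest_bdd import scenarios, given, when, then

scenarios("features/login.feature")

@given("a registered user", target_fixture="user")
def user():
    return {"name": "pat", "password": "secret"}  # stand-in for real test data

@when("they log in with valid credentials", target_fixture="session")
def log_in(user):
    return {"logged_in": True, "user": user["name"]}  # stand-in for the real app call

@then("they see their dashboard")
def see_dashboard(session):
    assert session["logged_in"]
```

The Gherkin stays readable for the business side, the code stays in one place, and the duplication is limited to the step text itself.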
Helping Teams Not Go Rogue
Everyone has their favorite framework or methodology. What happens when there are several tools in use and different technological requirements? One suggestion was to make sure everyone knows how to run the tests, regardless of the tools required. Spreading the knowledge helps encourage looking for synergies. We may not always be able to pick a unifier, but over time, learning from each group can help either draw projects together or at least determine which areas must be separate for technical reasons.
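One low-tech way to get the "everyone can run the tests" part is a thin wrapper that gives every suite the same front door. This is just a sketch under my own assumptions; the suite names and the commands behind them are placeholders for whatever each team actually uses.

```python
# Hypothetical sketch of a "one way to run everything" wrapper, e.g.:
#   python run_tests.py api
import subprocess
import sys

SUITES = {
    # suite name -> the command that team actually uses (all commands are placeholders)
    "api": ["pytest", "tests/api"],
    "ui": ["npx", "wdio", "run", "wdio.conf.js"],
    "perf": ["k6", "run", "perf/smoke.js"],
}

def main():
    suite = sys.argv[1] if len(sys.argv) > 1 else ""
    if suite not in SUITES:
        print(f"Usage: python run_tests.py [{'|'.join(SUITES)}]")
        sys.exit(2)
    sys.exit(subprocess.call(SUITES[suite]))

if __name__ == "__main__":
    main()
```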
---
...and with that, we bid adieu to STP-CON. Thank you all for a terrific week, and thanks for bringing the conference to me this time :).
Become an Influential Tester: Live from #STPCON
When we get right down to it, we are all leaders at some point. It's situational, and sometimes we don't have any choice in the matter. Leadership may be thrust upon us, but influence is entirely earned. I would argue that being influential is more important than being a leader. Jane Fraser, I think, feels the same way :).
Influence is not force, it's not manipulation, and it's not gamesmanship. Sadly, many people feel that that is what influence is. Certainly we can use those attributes to gain influence, especially if we hold power over others, but titles are somewhat irrelevant to real influence. People who respect you will walk through fire for you regardless of your title. People who don't respect you may put forward a brave face and pay lip service, but behind your back, they will undermine you (or, at the most benign level, just ignore you).
So how can we get beyond the counterfeit influence? I'm going to borrow a Coveyism (meaning I'm stealing from Stephen Covey, he of "The 7 Habits of Highly Effective People" fame). To have genuine influence, we need to build "emotional capital" with our associates and peers. That emotional capital needs to be developed at all levels, not just with upper management. One sure way to develop emotional capital is to exchange information and experiences. In my world view, that means being willing to not be the sole proprietor of my job. If I know how to do something, I try to make HOWTO docs that spell out how I do it. Some might think that would be a detriment, i.e. where's the job security? Fact is, my job is not secure without emotional capital, and sharing my knowledge develops that capital. It also frees me up to explore other areas, safe in the knowledge that I am not the silo; you do not need me to do that thing, because I've shown many others how to do it.
Persuasion is a fine art, and it's one that is often honed through many experiences of not being persuasive. Persuasion is, in my opinion, much easier when you are dispassionate and objective, rather than passionate and enraged. Making a clear case as to why you should be listened to takes practice, experience, and a track record of knowing what you need to do. Over time, another aspect of persuasion gets developed, and that is respect. Respect is earned, often over long periods of time, and unfortunately, it's one of those resources that can be wiped out in a second.
Influence often comes down to timing. Unless the building is on fire, in most cases, timing and understanding when people are able to deal with what you need to present and persuade about helps considerably.
Over time, you are able to develop trust, and that trust is the true measure of your emotional capital. If your team trusts you, if you have made the investments in that emotional capital, they will go to bat for you, because you have proven to be worth that trust. Like respect, it's a hard earned resource, but it can be erased quickly if you fall short or prove to not be trustworthy.
Being able to work with the team and help the team as a whole move forward shows the other members of the team that you are worthy of respect and trust, and that you deserve the influence you hope to achieve. Note: leadership is not required here. You do not need to be a leader to have influence. In fact, as was shown in an earlier talk today, the leader or first adopter has less influence than the first follower does. That first follower is showing that they have faith in the first person's objective, and by putting themselves in the position of first follower, they are the ones that influence more people to sign on or get behind an initiative.
A key component to all of this is integrity. It may not be easy, you may not always be popular, you may wish to do anything else, but if you keep true to your word and your principles, and you own up to shortcomings or mistakes without shifting blame, you demonstrate integrity, and that integrity, if not always outwardly praised, is internally valued.
Active listening is a huge piece of this. To borrow from Covey again, "seek first to understand, before you try to be understood". It's often hard to do this. We want to be right, and often, we put our emphasis on being right rather than being informed. We all do this at some point. Ask yourself "are you listening to what the speaker says, or are you too busy rehearsing the next thing you want to say?" If it's the former, good job. If it's the latter... might want to work on that ;).
Ultimately, influence comes down to reliability and dependability. People's perception of both of those is entirely dependent on your emotional capital reserves. You are not reliable and dependable, you demonstrate reliability and dependability. Over time, people perceive you as being reliable and dependable, and the relationships you foster help determine how deep that perception is.
Party Down with MS DevOps: Live from #STPCON
Anarka Fairchild is an engineer with Microsoft, and she's our lunch time keynote. Her talk is squarely focused on the emergence of DevOps, and how that term is both abused and misused, yet still a goal that many want to achieve.
From the Microsoft perspective (did I mention Anarka is with Microsoft? OK, now I have ;) ), DevOps starts with a solid Application Lifecycle Management Plan. It's not just technology and toolsets, but a mindset and cooperative approach to everything in the delivery pipeline for the business. Their approach starts with planning, and that planning should include development and test from the get go (yes, I can get behind this!).
Next, development and test link up and work to develop and proof the code in concert, with an emphasis on unit tests. Cross platform build agents are more important than ever, so Microsoft is leveraging the ability to build on many environments (Windows, Linux, iPhone, Android, etc.). Next up is release and deployment and Anarka walked us through the approach Microsoft uses to manage the release and deployment process. Finally, the end step (not really the end, but the start of the loop back) is Monitor and Learn. By evaluating usage stats, analytics, and taking advantage of reporting tools, we can look at all of these details and bring us back to "Do".
So what should we consider when we are building modern apps? Three areas that Anarka identified were Quality Enablement, Agile Planning, and Developer Operations. QA has historically been left to the end of the development life cycle. I don't need to repeat that this is inefficient (not to mention kinda' dangerous) in that we find bugs too late. Microsoft looks to be aiming to make quality enablement a central tenet of their development process.
Conceptually, this all sounds pretty interesting, but since I live in a Linux development world, much of the presentation is too product specific for me personally. If it seems like I'm being slim on details, it's because there's a lot of Microsoft-specific componentry being discussed. Having said that, I do like the fact that there is an emphasis on making tools and capabilities better for us plebes :). For those who work in Microsoft shops, it sounds like there's a lot to play with for y'all :).
Visual Testing: Live from #STPCON
Mike Lyles is covering something I like to talk and present on, so I felt it would be good to get another perspective. Visual Testing is important because, in many ways, we need to use our eyes to validate something an automated test cannot, and yet, our own eyes are untrustworthy. We fill in a lot of the details that we expect to see. We are remarkably good at inferring what we think we should be seeing, so much so that we can miss what would be obvious (or at least, what we think should be obvious on closer inspection).
Assumptions come from one place; the brain will sacrifice facts for efficiency. This is why we can be tricked into thinking we are seeing one thing when the full image shows we are looking at something else. On the bright side, the brain is constantly evolving (thank goodness the "your brain is locked at 25" idea has been debunked). More relevant is that the brain does not want to pay attention to boring things. We get conned into attempting to multi-task because we crave novelty, yet the brain can't really multi-task, so we make shortcuts to help us deal with what's in front of us. In short, we become more fallible the more novelty we crave. Additionally, the more stress we deal with, the more pronounced this cycle becomes.
One of the things I often suggest, and I learned this from James Lyndsay a few years back, is the need to repeatedly focus and defocus. We need to focus to see patterns, but we can often become so fixated that we miss other patterns that are clearly in front of us. Fans of the game SET are probably familiar with this, especially when the pattern is picked by someone else. With stress, the ability to focus on what's in front of you diminishes, but if you practice working with the stress, you can overcome your diminishing faculties.
Other senses can help us remember things, but what we see is usually more memorable than anything we experience in any other way. Mike gave pieces of paper to two participants, both telling the same story, but one was just words and the other had pictures. The person with the annotated pictures remembered a lot more of the details of the story.
People are naturally curious, and little children especially so. Over time, we tend to tamp down on that natural curiosity in favor of efficiency and performance. Good testers will work to try to awaken that natural curiosity, and they will try to open themselves up to approaching problems from a variety of angles.
We often approach systems with a preconceived notion of what they should do and where they should be placed. There's a comfort in standardization, but that can also lead us astray when an item is designed intentionally to thwart that expectation. In many cases, we interact with the device the way we expect to, not how it is actually configured. While I may say it is overkill to look at every device with fresh eyes, it does make for an interesting experiment to try to put aside the expectations we have and look at it as a brand new object.
One interesting example Mike used was to pull out a jar of gumballs and ask us how many there were. Some people guessed low (I guessed three hundred), some guessed high (close to 1000), but as we recorded more guesses and took the average, the average was within ten of the actual count. In short, a collection of observations and comments gets us pretty close to the reality, whereas any one of us might be wildly off, based on our limited visibility and understanding. In some cases, the crowd sees better than the individual.
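The arithmetic behind that is nothing more than an average, but it's worth seeing how far off the individuals can be while the mean stays reasonable. The guesses below are made up for illustration, not the actual numbers from the session.

```python
# Toy illustration of the "crowd average" effect with made-up guesses.
guesses = [300, 450, 980, 520, 610, 700, 340, 560, 640, 590]
average = sum(guesses) / len(guesses)
print(f"Individual guesses range from {min(guesses)} to {max(guesses)}")
print(f"Crowd average: {average:.0f}")  # individuals swing widely; the mean lands in the middle
```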
Good thing to remember is that what we think we are looking at may not be at all what we are looking at. Plan accordingly ;).
Leading Change from the Test Team: Live at #STPCON
http://bit.ly/JRChange
Think of a village next to a river. Early in the morning, someone is heard drowning in the river. A person runs in to save that drowning person. Later, another two people come down the river, drowning, and two people run in to save them. A little later, five people come down the river, drowning. This time, our hero gets dressed and starts running upstream. As the village elders yell and ask what he is doing, he responds, "I'm going upstream to see who or what is throwing people into the river."
This is a cute and yet good reminder that, at times, we need to stop dealing with the immediate crisis to see where the root problems lie. Another common joke: "when you are up to your rear in alligators, it's easy to forget the job is to drain the swamp."
John Ruberto points out that, sadly, most change efforts fail. The primary reason is that we often fail to build a good case for why the change is needed. There's a model called "the change curve" where we start with denial, put up resistance, then embark on exploration, and finally we commit to the change.
Making change needs two important things. First, it needs to have an initial leader, but just as important is a first follower. When a first follower steps in and participates, and most important, the leader embraces and accepts them, that encourages others to do so as well. Over time, those not participating will start to because their non-participation will be very visible, and now not following is the strange course. In other words, the most important leadership role is not the actual leader, but being the first follower.
First and foremost, we need to build the case for change. What's the need? Where can we get supporting data for the need for change? What would happen if we didn't implement it? What is the urgency? Pro tip: the higher the urgency, the more notice and attention it will get. However, often the most important changes needed don't rise to the level of most urgent. If left untreated, though, something important can certainly rise to most urgent (especially if the time left untreated results in a catastrophic bug getting out in the wild).
Next, we need to communicate in the language of our intended audience (as well as those who might not be in our immediate audience, since they may have influence on direction). Ideas need to map to a vision. Features need to communicate benefits. We do pretty well on the former, but we could use some help with the latter. In short, don't communicate the WHAT without also communicating the WHY!
We can communicate the needs, we can speak to the value, but we need to validate the hypothesis as well. That means "Scientific Method" and experiments to confirm or dispute our hypothesis. It's important to remember that the Scientific Method is not a one and done, it's a continuous cycle, especially when we hope to make changes that will work and, more important, stick. Don't give up just because your first hypothesis doesn't hold up. Does it mean your whole premise is wrong, or does it mean you may need to refine your hypothesis? We won't know until we try, and we won't know for sure if we don't genuinely experiment, perhaps multiple times.
Next, we have to roll out our change, and observe. And be ready to adapt and adjust. John used a cool image from Sweden in 1967, when the switch from driving on the left side of the road to the right went into law. Even with all the testing and experimentation, there was still some chaos in the initial implementation, and it took some time to adjust and resolve the issues that resulted. For us, we need to be open to feedback and consistently ask, "how can we improve this?"
We of course need to show progress. If everything thus far has gone according to plan but we are not showing progress, we may need to be patient to ensure we have adopted the change, but at some point, we need to objectively evaluate if our efforts and changes are really valid. It's of course possible that we could hit on all cylinders and adopt a change that doesn't really achieve what we hoped. Does that mean the change was irrelevant? Possibly, but it may also mean that, again, we need to adjust our hypothesis. Typically, though, sticky changes have to show progress worthy of the change.
In short, remember to be in love with the problem, and try to address the problem. Don't be too married to any solution that doesn't really accomplish the goal of solving the problem. Good goal to shoot for :).
You Wanna (R)Evolution? Live at #STPCON
Good morning, everyone. Happy Thursday to you :). STP-CON is heading into its final day, and I will say it again, it's been wonderful to have a software testing conference in my own backyard. It's also been fun to give tips and recommendations for places to go and things to see for people visiting from out of town. To that end, for those who are interested in going out to dinner locally this evening, I'm happy to make recommendations and tag along ;).
First thing this morning, we are starting off with a panel featuring Dave Haeffner, Mike Lyles, Smita Mishra, and Damian Synadinos. The topic is free form, with Q&A submitted by the audience, but the overall theme is (R)Evolution in software testing. Testing is evolving, and there is a sea change happening at the same time. We don't have a clear consensus as to what testing will be in the future, or who will be doing it. The whole "Testing is Dead" meme has proven to not be true. There will absolutely be testing, and there will be a lot of testing. The bigger question is "who will be doing it, and in what capacity?" This really doesn't surprise me, because I have been in software now for twenty-five years, and I can plainly say I do not test the same way today that I did twenty-five years ago. I use many of the same techniques, and principles relevant then are still relevant now, but the implementation and approach is different. Additionally, as I've learned and grown, I've extended my reach into other areas. Going forward, I would encourage any software tester not to just think "how can I improve my testing skills?" (please understand, that is very important, and I encourage everyone to improve those skills) but also to consider "how can I help add value to the organization beyond my testing ability?"
I've much appreciated the fact that the panelists are talking about adding value to their teams and also building up capital in themselves. Damian emphasized the way that we speak, and that we understand the agreements we make, and the depth of those agreements. Smita emphasizes developing your social capital, in the sense that you may be doing amazing work at your own company, but long term that will not help you if no one else knows who you are or what you are doing. Invest in yourself, develop your craft, and make sure that people can discover that. If you think I'm going to take this opportunity to say "start a blog, maintain it, learn in public, and share your ups and downs"... well, yeah :).
It's no surprise that Dave is being asked about automation and tooling, because, well, that's what he does and what he's known for. One of the things I appreciate about what Dave does is that he maintains an active newsletter sharing tips about what he does and has implemented, and how he's gotten around challenges and issues. I appreciate these bits (yes, I'm a subscriber) in that he exemplifies a lot of the sea change we are seeing. Dave doesn't just state that he knows what he is doing, he physically demonstrates it every week, at least as far as I am concerned. What's cool is that I am able to get access to what he has learned, consider it, see if it's something I can use, and then experiment with his findings myself. Dave of course got to field the "testing vs. checking" distinction. He used to think it was an unnecessary distinction; however, as he's explored other industries, he's come to see that there is a more nuanced way to look at it. Automation speeds things up, it's a helper, but it's not going to think for itself. Automation and the checking element are helpful after the testing and discovery element has been completed and solidified. Damian makes the case that tools help us do things we need, but that tools will not always be appropriate for all people (and he gave a shout out to my session... thanks, Damian :) ). For me, every programmer explores and tests to come up with their solution, and then runs checks to make sure their solution is still valid going forward (or at least, they should ;) ).
Damian also got to field a number of questions around metrics and their relevance, and in most cases, Damian would be happy to see most of the metrics disappear, because they are useless at the benign end and dangerous in their manipulation at the more malignant end. The worst implementations of metrics are applied not to measure processes, but to measure people, especially people against other people. Sadly, there are areas where people have to produce to an expected level. Think of salespeople needing to meet sales goals. Fair or not, that's a metric that is frequently applied and has significant ramifications for those people. As testers, we'd love to believe we are isolated from most metrics, but in fact, we are not. Were it up to me, I would encourage testers to develop realistic and interesting goals for themselves to learn and to stretch, and to evaluate them regularly. Fact is, none of us can learn everything and be experts at everything. I'd rather have a tester who focuses on the goals they are passionate about, and does their best to develop some peripheral skills for broader application, but I don't want to put someone into a position where they are working on a goal that they hate. In that case, neither of us wins. They get frustrated and don't progress, and we don't get the benefit of their working on the things that they are excited about, and therefore willing to pour their energies into.
The irony of metrics is that everyone wants to see how close they are to complete automation, or if they are automating enough. Dave thinks they are asking the wrong question if that is their focus. He'd encourage organizations to look at their defect list, their customer logs, and access reports, and see where people are actually focusing their efforts and sharing their pain. If your automation efforts are not covering those areas, perhaps target your focus there. In other words, don't think about "are we automating all the things?" but "are we actually targeting our automation at the things that matter?" My guess is that the areas falling short are the ones that are less easy to automate.
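Here's a rough sketch of what "look at where the pain is" could mean in practice. The defect data and area names are made up; in real life this would come from whatever your defect tracker or support system exports.

```python
# Hypothetical sketch: rank product areas by reported defects so automation
# effort can be aimed at where customers actually feel pain.
from collections import Counter

# stand-in for an export from the defect tracker: (area, ticket id)
defects = [
    ("checkout", "D-101"), ("checkout", "D-104"), ("search", "D-102"),
    ("checkout", "D-109"), ("reports", "D-110"), ("search", "D-115"),
]

by_area = Counter(area for area, _ in defects)
for area, count in by_area.most_common():
    print(f"{area:10s} {count} defects reported")
```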
Outsourcing is a reality, and it's universal. Every one of us outsources something we do, so we should not be surprised that testing gets outsourced as well. Crowdsourcing is becoming more common and an additional tool for organizations to use. As a coordinator and facilitator for Weekend Testing events, I know well the value of crowdsourcing and having many eyeballs in a short amount of time on a product providing great amounts of information, but without focus and targeting, the feedback you get back is likely unfocused and untargeted.
Many of the skills we learn do not come to us fully formed. Pardon the mental image, but a human mother does not give birth to a fully grown adult, ready and capable from the get go. Likewise, we don't become masters of any endeavor overnight, or with little effort. More often, we try, we experiment, we get frustrated, we try again, we do a little better, we move forward, sideways, and backwards, but each time, we learn and expand our experiments, until we get to a point where we have a natural facility.
Round Two extended out to the audience with ad-hoc questions, and the first question was about communicating through outsourcing. How do you have effective communication with off-shore participants? Smita points out there are two issues: the first is the conversation level, and the second is the time difference and isolation. It will never be the same as having the team all together in the same place. In these cases, communication needs to be clear, unambiguous, and agreed upon. This reminds me of the reports I had to have reviewed by my leads a decade-plus ago to be translated to Japanese and then communicated back in the opposite direction. We had to spend a significant amount of time to make sure that we were all communicating on the same wavelength. Often, that meant I had to rewrite and clarify my positions, and to do so frequently. Damian pointed out that location is not the only way that communication can go wrong, though the lossy communication is more pronounced. Reducing miscommunication comes from asking for clarity. In short, don't be afraid to ask. In my own work world, I joke that I am allowed "one completely stupid question per day" of everyone on my team. Yes, I actually word it like that. It's a bit of my self-deprecating humor, but it sends a message that I do not pretend I automatically know everything, and that I may not fully understand what is being asked or needed. The benefit of that is that it shows everyone else in my organization that they can approach me in the same manner. It's not foolproof, but it certainly helps.
Wednesday, April 6, 2016
Maximizing Success with Data Testing: Live at #STPCON
Regardless of what we develop, what platform, what use case, everything we do at some point comes down to data. Without data there's not much point to using any application.
Lots of applications require copious amounts of data: reliably accessible, reliably recreatable, and with confirmed content that will meet the needs of our tests. At the same time, we may want to generate loads of fresh data to drive our applications, inform our decisions, or give us a sense of security that the data we create is safe and protected from prying eyes. Regardless of who you are and where you are in the application development cycle, data is vital, and its care, feeding, and protection is critical.
Smita Mishra is giving us a rundown on Big Data, Enterprise Data Warehouse, ETL (Extract, Transform, Load) processes, and Business Intelligence practices. We can test with 1KB of data or 1TB of data. The principles of testing are the same, but the order-of-magnitude difference can be huge.
Big Data, in my world view, is used to describe really large data sets, so large that they cannot easily fit into a standard database or file system. Smita points out that 5 petabytes or more defines "Big Data". Smita also showed us an example of an "Internet Minute" and what happens and transmits during a typical minute over the Internet. Is anyone surprised that the largest bulk of data comes from Netflix ;)?
Big Data requires different approaches for storage and processing. Large parallel systems, databases of databases, distributed cloud system implementations, and large scale aggregation tools all come into play. Ultimately, it's just a broad array of tools designed to work together to cut massive data amounts to some manageable level.
In my own world, I have not yet had to get into really big data, but I do have to consider data that spans multiple machines and instances. Additionally, while I sometimes think Business Intelligence is an overused term and a bit flighty in its meaning, it really comes down to data mining, analytical processing, querying, and reporting. That process itself is not too hard to wrap your head around, but again, the order of magnitude with Big Data applications makes it a more challenging endeavor. Order of operations and aggregation/drill-down become essential. Consider it a little bit like making Tradizionale Balsamic Vinegar, in the sense that your end product effectively gets moved to ever smaller barrels as the concentration level increases. I'll admit, that's a weird comparison, but in a sense it's apt. The Tradizionale process can't go backwards, and your data queries can't either.
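A toy sketch of that "ever smaller barrels" idea, with completely made-up data: each aggregation step produces a coarser, more concentrated summary, and once the raw detail has been poured away, you can't reconstruct it from the summary.

```python
# Toy sketch of successive aggregation: raw events -> per-store totals -> per-region totals.
# Data and field names are invented for illustration.
from collections import defaultdict

events = [  # stand-in for raw event data
    {"region": "west", "store": "A", "sales": 120},
    {"region": "west", "store": "B", "sales": 80},
    {"region": "east", "store": "C", "sales": 200},
]

# Step 1: raw events down to per-store totals
per_store = defaultdict(int)
for e in events:
    per_store[(e["region"], e["store"])] += e["sales"]

# Step 2: per-store totals down to per-region totals (no way back to the raw detail from here)
per_region = defaultdict(int)
for (region, _store), total in per_store.items():
    per_region[region] += total

print(dict(per_region))  # {'west': 200, 'east': 200}
```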
There are some unique challenges related to data warehouse testing. Consistency and quality of data is problematic. Even in small sample sets, inconsistent data formats and missing values can mess up tests and deployments. Big Data takes those same issues and makes them writ large. We need to consider the entry points of our data, and to extend that Tradizionale example, each paring-down step and aggregation entry point needs to verify consistency. If you find an error, stop that error at the point that you find it, and don't allow it to be passed on and cascade through the system.
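What "stop the error where you find it" might look like at a single entry point, as a minimal sketch: the file name, required fields, and date format below are all assumptions for illustration, not anything from the session.

```python
# Hypothetical sketch of a validation gate at one ETL entry point: reject rows
# with missing values or malformed dates before they cascade downstream.
import csv
from datetime import datetime

REQUIRED = ["customer_id", "order_date", "amount"]

def validate_row(row):
    problems = []
    for field in REQUIRED:
        if not row.get(field):
            problems.append(f"missing {field}")
    try:
        datetime.strptime(row.get("order_date", ""), "%Y-%m-%d")
    except ValueError:
        problems.append("bad order_date format")
    return problems

def load(path="orders.csv"):
    accepted, rejected = [], []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            problems = validate_row(row)
            (rejected if problems else accepted).append((row, problems))
    print(f"{len(accepted)} rows accepted, {len(rejected)} rows rejected at this stage")
    return accepted, rejected
```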
Continuous Testing: Live from #STPCON
Continuous Testing is one of the holy grails of deployment. We've got the continuous integration piece down, for the most part. Continuous testing, of course, fits into that paradigm; the integration piece isn't helpful if changes break what's already there and working. From my own experience, the dream of push-button build, test, and deploy feels close at hand, but somehow there's enough variance that we don't quite get there 100%. This is for an organization that deploys a few times a week. Now imagine the challenge if you are at an organization that deploys multiple times every day.
Neil Manvar is describing his time at Yahoo! working on their mail tool, and some of the challenges they faced getting the automated testing pieces to harmonize with the integration and deployment steps. One of the ways they dealt with making the change from a more traditional waterfall development approach to an Agile implementation was to emphasize more manual testing, more often. Additionally, the development team aimed to help in the process of testing. A plus in that there were more testers, but a minus in that programmers weren't programming when they were testing. Later, the brute-force release approach became too costly to continue, so the next step was to set up a CI server running Selenium tests against an internal Grid and coordinating the build and test steps with Jenkins. Can you guess where we might be going from here ;)?
Yep: unreliable tests, the need to rework and maintain brittle tests, limited testability baked into the product, and so on. Thus, while the automation was a step towards the goal, there was still so much new feature work happening that the automation efforts couldn't keep up (hence, more of a reliance on even more manual testing). A plus from this was that the test teams and the development teams started talking to each other about ways of writing robust and reliable tests, and about providing the infrastructure and tooling to make that happen.
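For a flavor of what "robust and reliable" looks like at the test level, here's a hypothetical Selenium test (Python bindings) pointed at an internal Grid. The Grid address, URL, and locator are made up; the key point is using an explicit wait instead of a hard-coded sleep:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

GRID_URL = "http://selenium-grid.internal:4444/wd/hub"  # hypothetical internal Grid

def test_inbox_loads():
    # Run on a shared Grid node rather than a local browser.
    driver = webdriver.Remote(command_executor=GRID_URL,
                              options=webdriver.ChromeOptions())
    try:
        driver.get("https://mail.example.com")  # placeholder application URL
        # Explicit wait: poll until the element is visible, up to 15 seconds,
        # instead of sleeping for a fixed time and hoping the page is ready.
        inbox = WebDriverWait(driver, timeout=15).until(
            EC.visibility_of_element_located((By.ID, "inbox"))  # placeholder locator
        )
        assert inbox.is_displayed()
    finally:
        driver.quit()
```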
This led to a focus on continuous delivery, rapid iteration, and allowing for the time to develop and deploy the needed automated tests. In addition, the new management mandate was regular delivery of software, not just over a period of weeks, but multiple deploys per day. What helped considerably was that senior management gave the teams the time and bandwidth to implement the automation, including health checks and maintenance steps, to iron out the deployment process, and ultimately to enforce a discipline around pull requests and merges so that the deployment pipeline, including build, test, and deployment, would be as hands-off as possible. Incrementally updating also improved the stability of the product considerably (which I can totally appreciate :) ).
OK, so this is all interesting, but what were the long-term effects of making these changes? Ultimately, it allowed QA to expand their skill set and took the busywork off their plates (or a large amount of it) so they could focus on more interesting problems. Developers were able to emphasize development, including unit tests and more iterative improvement. Accountability was easier to implement, making it possible to see where issues were introduced, and by whom. Additionally, a new standard of quality was established, and overall, features were delivered more quickly, uptime improved, new features were deployed, and overall satisfaction with the product improved (and with it, revenue, which is always a nice plus ;) ).
Demystifying the Test Automation Pyramid: Live from #STPCON
One of the key things I look for in talks is not to find things that I am good at (otherwise, really, what's the point of going?) but to visit and examine areas where I could use improvement or, to be frank, may just not be all that savvy about. Jim Hazen and I have communicated through Twitter and testing channels for years, and I have often appreciated his insights on writing test automation, specifically because he highlights the pitfalls that often get glossed over.
One of those areas is the Test Automation Pyramid. What's that, you say? Well, there are different levels of automation, and foundations that automation is built on. Really, that's the core of the idea of a test automation pyramid. We build up from a lower foundation to get more specific and more focused, just as a pyramid starts broad and wide and rises to a point at the top. Different levels and contexts require a different focus, some larger than others, but all structurally important. Commonly, this pyramid is structured in three layers: Unit Tests at the base, Service Tests in the middle, and UI Tests at the top. One benefit is that by considering these tests in their places, and in their order of creation, we put test design up front and carry it through the whole process.
Sometimes we look at the pyramid and we equate placement with importance. UI tests are at the top, and the smallest part of the pyramid, so that means they really aren't as important... right? Well, no, that's not what it means at all. What it does mean is that, by focusing on the Unit and Service tests, we are able to put in place logic and fixtures we can use higher up the pyramid. UI tests have value, but yes, they can be finicky if the work below them hasn't been done, or done well. Also, many think that because the Unit tests are at the bottom of the pyramid, most of your tests should be unit tests. Well, not quite. The idea of unit tests is atomic testing of elements (methods, functions, etc.), making sure we have a strong sense that the tests are validating the functionality of the smallest elements of the code. They're numerous by their very nature, and by developing robust unit tests, we can develop a base for Service and UI tests. So does that mean the pyramid diagram is wrong? Potentially, but it still serves as a good way to look at the structuring of tests, timing, and precedence. The pyramid is meant to be a guideline, not a set of hard and fast rules.
Think about it this way. The extent of unit testing may result in a test for each line of code, and sometimes more. Positive and negative conditions may be exercised, and the basis of those unit tests can be used to help develop fixtures for testing higher up the pyramid. Additionally, at the unit test level, we can look at ways to build testability and instrumentation into the product from the ground up. Where is that insight likely to happen? Probably at the component level. Write a lot of unit tests, and it's likely procs and fixtures will come out of it, because good programmers are lazy, and that is a very good thing :).
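As a small illustration of how work at the base of the pyramid feeds the layers above, here's a hypothetical pytest sketch: a couple of atomic unit tests on a tiny function, plus a fixture that grows out of them and could be reused by service-level tests (the function and data are invented):

```python
import pytest

def normalize_email(raw):
    """Hypothetical unit under test: the smallest piece of product behavior."""
    return raw.strip().lower()

# Unit level: fast, atomic checks covering positive and already-clean cases.
def test_normalize_email_strips_and_lowercases():
    assert normalize_email("  Alice@Example.COM ") == "alice@example.com"

def test_normalize_email_leaves_clean_input_alone():
    assert normalize_email("bob@example.com") == "bob@example.com"

# The fixture that falls out of the unit work can be reused higher up the
# pyramid; a service-level test would point this data at a real or stubbed API.
@pytest.fixture
def known_user():
    return {"email": normalize_email("  Alice@Example.COM "), "plan": "trial"}

def test_service_accepts_known_user(known_user):
    # Placeholder for a service-level check that builds on the same fixture.
    assert known_user["email"].endswith("@example.com")
```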
When someone tells you they are going to do 100% automation, take a step back and realize that it's never going to happen. There's a lot that can be done, but there are still a lot of tests that require eyes, ears, and brains to determine and evaluate. Having said that, the tools are getting us closer to that 100%, but we still have quite a ways to go.
Another helpful rule of thumb is to think of tests getting more granular as you go down the pyramid, and more complex the higher up the pyramid you go. Tools help, but no tool covers all of these areas comprehensively. No one tool will solve all of your problems. Custom code will very likely be required to make sure each of the levels and layers can play well with each other. Someone needs to write that code, and it's likely a new tester writing automated tests will not be the ideal candidate for that job (they may well grow into it over time, but unlikely at the early part of their careers).
We also need to have a chat with management and set realistic expectations. See above about one tool to rule them all (hint: there isn't one). You need a staff with proper skills at each level. You may get lucky and find all of them in one person, but it's more likely you will have a bunch of people who together have those skills. Get them together and let them talk amongst themselves.
Overall, it's an interesting model. Much of it makes sense, some of it breaks down, but taken in the spirit in which it was originally designed, it's quite useful, even if it should really be more of a broad parallelogram... but that's nowhere near as catchy ;).
Automation for the People: Live from #STPCON
Automation is often couched in terms of tools and test automation. Christin Wiedemann thinks we need to look at this whole automation thing differently.
I've long been a believer that people put themselves into a box when they say they are good or not good at test automation. Part of the problem is that many people don't give themselves credit for the automation they do every day. Have you written a script to run a sequence of commands so you don't have to type them all the time? Guess what? You're automating. At its core, that's really what automation is: a way to get sequential steps to run in the order you want, without you personally having to enter those commands. Everything else is an order of magnitude from that concept.
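In that spirit, even something as small as the following counts: a hypothetical Python script that runs a fixed sequence of commands so you don't have to type them each time (the commands themselves are placeholders):

```python
import subprocess

# The sequence of commands someone might otherwise type by hand every time.
COMMANDS = [
    ["git", "pull"],
    ["pip", "install", "-r", "requirements.txt"],
    ["pytest", "-q"],
]

for cmd in COMMANDS:
    print("running:", " ".join(cmd))
    # check=True stops the sequence at the first step that fails.
    subprocess.run(cmd, check=True)
```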
One of the biggest issues with automation is "waste". Yes, automation efforts can be every bit as wasteful as the manual efforts and waste they are hoping to replace. Too often, we emphasize the coding aspect of the endeavor, without really looking at all of the other elements that contribute to that.
Have we ever stopped to think about how silly the term "Automated Tester" sounds? What is an Automated Tester? Is it a cyborg? A machine? When this term is used, we really mean "a tester who is proficient at writing software test automation", but it also points to a bias and an attitude we need to think about. We want to automate testing... why? We want to get people out of the equation... again, why? What I often say is that I want to get the "busywork" out of the way, so that I can save my eyes and my attention for the really interesting stuff, or the areas that have not been explored yet.
We don't build code, we build a product. In that sense, every one of us in the engineering and delivery process is a developer. Yes, testers are developers, even if we may not be programmers (and again, more of you do more programming than you give yourselves credit for). There are many areas where automation will struggle to be of benefit, though each year the gaps narrow. Automation does offer some tremendous benefits when you need repeatability, or when you want to parallelize work. Running fifty tests on fifty different machines is a real plus for time. Setting up machines and getting them to an interesting state is also very time consuming, and that is a place where I wholeheartedly welcome automation. Time to market can influence a need for automation, but make no mistake, automation takes time and resources. It may set us up for a later time saving, but it will be expensive up front. That's not a criticism, but it is a reality check ;).
When we say "we want to replace manual testing with automation", what are we actually saying? How will we do it? What is the method we will employ? When will we do it? How long will it take? What are the most important areas to cover? If we can't quantify these questions, we will have a really hard time putting together a successful implementation.
There are a lot of myths around automation. It will save us time. It will save us money. It will get us to market faster. Can those ideals come to fruition? Sure, but probably not right away. In fact, at the beginning of an automation effort, you may go into the red on all of these areas before you get any sustainable benefits. Make no mistake, I am not a Luddite decrying automation; I use it daily and I love using it. Deploying releases would be murderous without it. Setting up my development environment by hand would take a lot of time. Serially running tests can sap even the most energetic. Still, there's a lot that goes into creating automation, and there's a lot of work that needs to happen up front before any automation can happen. At the moment, systems cannot invent tests automatically (though there are some algorithms that can create dynamic test branches that on the surface look like they are creating tests out of thin air, but even there, it's based on known knowns). When we deal with new development and features, there needs to be thinking and planning and experimentation (exploring) while the programming is happening and, we hope, while unit tests are being written.
One thought before we discuss the obvious (that is, coding): anyone involved in automation needs to know how to design repeatable, reliable, and informative tests. I have met excellent programmers who struggle with designing good tests, and I've met people who can develop great tests but struggle with the coding part. Solution: get those two people together. Put that chocolate into the peanut butter (you may have to be my age to get that reference ;) ). Test automation requires planning, preparation, creation, execution, analysis, and reporting. All of those need to be considered and developed, and, news flash, the automation programmer may not be the person best suited to perform all of those tasks :).
Christin makes the case that automation can be helped by developing personas, not so much for developing tests, but to determine what skills contribute to the process and who might have those skills. Personas go beyond a laundry list of attributes; they humanize the roles and help us see who can be helpful for each part of the journey. Remember, our goal here is to maximize the possibility that our efforts will be successful, and to do that, it would help a lot to get as many people involved as possible and to leverage the skills they bring to the table.
Home Field Advantage: Welcome to #STPCON
It's here. After months of waiting, working, tweaking, and applying ideas, the Software Test Professionals Conference (STP-Con) has come to Millbrae, California. I cannot begin to explain how nice it feels to be able to attend a conference where the total travel time from my house to the venue is less than fifteen minutes.
I had a great opportunity Monday to teach and facilitate a workshop on "Teaching New Software Testers", and I received some great feedback about where the material is on track, where it can be improved, and some new ideas to consider adding to future presentations. Since it's an extra pay item, I'm not going to blog about all of the details from that talk, but if you read my blog regularly, you probably already know what I've covered :).
Tuesday night we had a Meet & Greet at Steelhead Brewery in Burlingame, and I was really happy to have two special guests attend with me: my daughters Karina and Amber. They have both expressed interest in learning more about tech and how it might be something they can contribute to in the future, and it was wonderful to see so many of my colleagues reach out to them, include them in conversations, and talk about their careers and other avenues that might get them excited. I want to give special thanks to Smita Mishra for taking the girls under her wing and introducing them to so many people, and to Ben Kelly for hanging out with them and talking all things Japan and Game of Thrones. On the drive home last night, they were happy and excited to have attended, and mentioned key conversation points and things that interested them.
Today is the first day of the conference proper, and that means the TESTHEAD live blog is now in full swing. There will be individual posts for each session other than mine (I'll post a summary of my session later ;) ).
We start off today with Karen Johnson giving a keynote about "How Nancy Drew Prepared Me to Become a Software Tester", but before she got into the main topic, she gave some great advice for career curation, including encouraging attendees to attend sessions that are relevant to their current work, but also to look for something that makes you stretch or may be outside of your current focus. You may or may not be working at the same company a year from now, but your career carries with you. Attend sessions that will grow you not just for now, but for the future, too.
For those not familiar with Nancy Drew, she's an iconic literary figure in children's books. She has different names in different countries, but she's a young lady who solves mysteries. A quote included in Karen's talk was that "Women in many occupations told of learning from Nancy to see adventures in problem solving and the joy of self-reliance". That's a great description of a software tester, too, isn't it :)? Every product is its own unique mystery. Now, to be frank, I didn't read many Nancy Drew books, but I read the Hardy Boys, which was the mystery series aimed at boys, while Nancy Drew was aimed at girls. If that seems a little closed-minded, please realize I was a boy in the seventies. I did boy things, but later on I learned more about Nancy Drew, and especially wanted to learn more about her when my daughters came into my life. The Hardy Boys and Nancy Drew books are similar in format and in delivery. They both let the reader get drawn into problems and challenges, follow along as the protagonist learns more about the situations, and test out their own hypotheses about "whodunnit". These really are wonderful primers to help encourage young readers to see the mysteries and how they might solve them.
I've been a fan of the metaphor of tester as "beat reporter", but "detective" is a natural fit as well. We often have to take on different domains. Detectives put themselves in unfamiliar situations. They observe, they take notes, they look for situations that could potentially be dangerous, and they present the case to find the "whodunnits". Karen mentions a book called "Making Thinking Visible" and how it allows people to visualize problems. The basic idea is the triplet of See/Think/Wonder. Testing at its core uses these three words, if you step back and think about it. Additionally, another triplet is Think/Pair/Share. We don't want testing to be a mystery; we want to be open and share our findings. The more people who look at testing as cognitively challenging and interesting, the more people will get involved with it, and frankly, we need more testers, even if tester is not part of their title.
Consider the fact that what we think about things changes as we get new information. We are conditioned to think that changing our minds shows weakness. It shouldn't. We should embrace the idea that "I used to think... but now I think..." because that means we are actively considering what we are learning. We should not get so tied up in our world view, because new information can change the entire trajectory of what we are doing. We should be open to and welcome that, even though it may be uncomfortable. We should embrace active thinking, and we should approach things with wonder. Believe me, that can be hard when you've tested something multiple times. "Been there, done that" is not just mind-numbing, it's dangerous, because that's the condition where we are most vulnerable to missing things. We get blind to what we see all the time, and we take shortcuts in our heads because we know what's happening... or so we think.
Ultimately, the same traits that excite the imagination in Nancy Drew mysteries (and, OK, Hardy Boys mysteries for me when I was a kid) give us some great parallels for how we can re-engage and make testing much more interesting.