I've deliberately taken some time to step away from blogging and post less these last few weeks. Part of that was because I had completed a large multi-part series (more about that in a bit), and part of that was a conscious decision to spend more time with my family during the holidays. Still, I can't let the year end without my "the year that was" post, and without seeing if I can find another line from the Talking Heads' "Once in a Lifetime" to keep the string going. For the fourth year running, I'm still able to do that :).
This year was definitely one of digging into my mind and my experiences, and taking advantage of the fact that what I can learn, and what I struggle with, has value to others as well. I focused the first part of the year on running through the Practicum approach with Selenium 2, specifically David Burns' book. I found this process to be enjoyable, enlightening, and yes, it required me to be willing to dig further than the printed material. Plain and simple, even with the best guide, the best materials, and the most specific examples, the natural drift of software and software revisions means that we need to be alert to the fact that we have to do some digging of our own. Sometimes we get frustrated in our efforts, but if we continue to dig, and ask questions while we dig, we can do more and get better at what we are doing.
SummerQAmp was an important focus for me and others this year, and we expanded on the modules that were offered. We made inroads on what we covered, but we also discovered that the materials and the model we were using were less effective as we tried to branch out and try different ideas. The biggest challenge? How to make the material engaging for self-directed readers. Much of the best software testing training out there that focuses on skills-based testing is best learned with a group of people discussing and debating the ideas and approaches. Taking that approach and making it work for a single individual who needs to learn the material is a challenge we are still working on, and I am hopeful we will make good on improving this process in 2014.
Weekend Testing in the Americas has had a great run, and it has been blessed with additional facilitators who are helping to take the weight off of my shoulders and bring in ideas and approaches that are different from mine (which is great :) ). Justin Rohrman and JeanAnn Harrison have been regular contributors and facilitators, and to both of them I owe a huge debt of gratitude. There are definitely more opportunities to dig for more and better ideas when additional facilitators are helping to look for and pitch them.
If there was one concept or test idea/approach that became foremost in my thoughts this year, it would have to be "accessibility": how those with physical disabilities interact with systems and information. Much of my work this year was associated with learning about and working on stories that focused on exactly how we can make our interactions better for those who cannot see or interact with systems in ways most of us take for granted. I worked primarily with an intern to help make this a better focus for our product, and to learn how to continually ask questions and consider ways we can better deliver a usable and worthwhile experience to all of our users.
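As one small, concrete illustration of the mechanical end of this kind of work (this is my own sketch, not anything from the actual project): a tester can programmatically flag images that have no alt text, which a screen reader would have nothing to announce for. The class and sample markup below are invented for the example.

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collects <img> tags that lack an alt attribute -- one small,
    mechanical slice of the accessibility questions a tester can ask."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag.
        if tag == "img" and "alt" not in dict(attrs):
            self.missing.append(dict(attrs).get("src", "<no src>"))

checker = MissingAltChecker()
checker.feed('<p><img src="a.png" alt="logo"><img src="b.png"></p>')
print(checker.missing)  # ['b.png']
```

A check like this only catches the absence of alt text, of course; whether the alt text that is present actually conveys something useful is exactly the kind of question that still needs a human asking it.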
I participated in three conferences this year, two of them here in the United States and one in Sweden. STP-CON in April (held in San Diego, CA) was a chance for me to talk about Agile testing and how I adapted to being a Lone Contributor on a team (a situation I am no longer in, as I am now part of a larger testing organization, though one still small enough that many of the lessons still apply). August 2013 saw me presenting in Madison, Wisconsin at CAST 2013, Test Retreat, and Test Leadership Camp (a continuous week-long event of learning, interacting, and developing ideas that I would present in other talks and places). Finally, I was invited to speak on two topics (balancing automated and exploratory testing, and how to "stop faking it" in our work lives) at the Øredev 2013 conference in Malmö, Sweden.
In addition to formal conferences, I participated in numerous Meetup events in and around the San Francisco Bay Area, and what's more, with Curtis Stuerenberg, helped to launch the Bay Area Software Testers Meetup group. This is a general interest software testing group, with the goal of expanding into topics that, we hope, go beyond just testing and into aspects that can help software testers get a better feel and understanding for more areas of the software development business.
An interesting challenge came my way in 2013. I've been blessed with additional outlets to write and present my ideas beyond this blog. SmartBear, Zephyr, Atlassian, Testing Circus, Tea Time With Testers, and StickyMinds have all been outlets where I have been able to present ideas and write to a broader audience, and my thanks to the many readers of this blog who have seen those articles, shared them with others, and helped make it possible for me to keep writing for these sites. I appreciate the vote of confidence and the comments and shares of my work with others, and if you will keep sharing, I will keep writing :).
The project that, for some, will be the most recognizable for 2013 was, in many ways, just another bold boast that I figured would yield some basic ideas to write about each day. Instead, my project to expand on the "99 Ways to Become a Better Software Tester" e-book offered by the Ministry of Testing in the UK became a multi-month process, and one in which I had to do some significant digging to bring to completion. While the series is now finished, the ideas and the processes I learned are still having ripple effects. I think there is more where this came from, and I want to explore even further how I can expand on the ideas I wrote about, and make them better.
Through being on the Board of Directors for the Association for Software Testing and serving as chair of the Education Special Interest Group, I learned a lot about how to better deliver software testing training, and about how to expand the mission of AST, the way we deliver training, and the group of people involved in that training. I took a step back this year to let others teach and become leaders, and I am grateful for the level of participation and focus shown by the many people who stepped up to help teach others. It cements my belief and testimony that our broad, worldwide community of software testers contains some of the most giving and helpful people I've ever met.
This year saw me interacting with two additional initiatives, one I've been involved with for a few years now, and one that is very new to me. Miagi-Do had a banner year, in which we started a blog, developed more challenges, and sought to get more involved as a group in the broader software testing discussion. We brought on board many additional testers, many of whom are doing a great deal to put into practice ways to help share and grow the skills of the broader testing community (many of the current facilitators for Weekend Testing around the world are also Miagi-Do students and instructors). Additionally, I was invited to participate in a mentoring program through Per Scholas, and have interacted with a number of their STeP program graduates (many of whom have also come through and been participants in Weekend Testing as well).
All in all, this has been a year of digging, a year of discovery, a year of new efforts and making new friends. It's been a year of transition, of picking up, and of letting go. A year of seeing changes, and adapting to them. It's been a year of learning from others, and teaching what I can to those interested in learning from me in whatever capacity I can teach. Most importantly, it has shown me that there are many areas in testing where I can learn more, perform better, and get more involved than I already am. What will 2014 bring? My guess is that it will be a year of new challenges, new ideas, and more chances to interact with my peers in ways I may not have considered. One thing's for sure, it will not be "Same as it ever was" ;).
Tuesday, December 31, 2013
Thursday, December 19, 2013
Sharing Some December Cheer With the #Packt$5 #ebookbonanza
For those of you who read my blog regularly, you know that I do not sell advertising here, and I do not typically "shill" for other companies, for products, or any of that.
However, from time to time, a company that is helpful to me, and gives me items to help me learn and share on my blog, has something they do that I think is worth talking about and sharing with others. Packt Publishing has been the source of many of my book reviews, and they frequently give me the eBook versions that I review, which I appreciate. In that spirit, they asked if I would highlight a special December program they are running, and I said, "sure, I'd be glad to".
Packt has, from December 19, 2013 through January 3, 2014, priced all of their eBooks and videos at $5 each. That's roughly 1700 titles, all told, and you can buy as many as you want at that price. More information is available at http://bit.ly/1jdCr2W
If you would like to take advantage of this, then go to the link above and make your selections. I know I will be (I have some titles related to node, nginx, and some open source graphical applications already picked out, and yes, they will likewise be reviewed soon :) ).
Wednesday, December 18, 2013
On Surroundings, Clutter and Getting Out of My Own Way
One of the things I have found over the past several years, and it seems I am destined to forever relearn this, is that I can always find a way to get in my own way. I am a fan of classification, organization, and attempts to streamline and de-clutter, yet I always seem to have too much stuff surrounding me. Note, this is not a complaint, but an observation. I love the home I live in. It's not opulent, not hugely spacious, but it serves the needs of a family of five quite nicely. The one drawback, at least for me, is that there has to be someplace where chaos can exist so that order can take place elsewhere. Ironically, those places seem to be my garage and my office (i.e., my two "creative domains").
There is a joke among "organizistas" that allows for the "ten percent rule". No space is ever completely organized. There is always some place where there is chaos and disorganization. Every immaculate kitchen has a junk drawer. There is one closet that is not perfectly organized, and one room tends to be a jumble of things. Often, it is because the activity or purpose of those areas is perpetually labeled "miscellaneous". It's where all of the odds and ends tend to end up. My office gets this designation.
Understand, my office is the most "unlovable" room in our house. For various reasons, the previous owners of our home, when they chose to build an upstairs addition over the garage, decided to carve out a section of the garage for the staircase. They made a large main room, a half bath, and a "bedroom". Well, some room had to go over the staircase, and to be creative about it, the "bedroom" was the spot chosen to sit over the stairwell.
Were the room a perfect rectangle, it would be nine feet nine inches wide by thirteen feet six inches long, with five and a half feet by two feet taken up by closet space. But this isn't that room. Instead, there is "the bump", four feet six inches by four feet four inches, that goes over the stairs. This unfortunately placed bump makes the room a bit, shall we say, less than ideally shaped. To the previous owners' credit, they did make some interesting use of the space and made an additional closet out of it, albeit a closet with a floor that rises above the floor of the main room.
A few owners had this house before we did, and every one of them used this room for the same purpose. It was "the office" or "the spare room", because it did not fit any preconceived plan to be used as a regular bedroom. Feng Shui was not taken into account when this room was made ;). Thus, I tended to do the same, and for fifteen years, this room has undergone several transformations, purges, redesigns, clutter abatements, organizations, reorganizations, and somehow, I still manage to get work done in here.
The room has to serve several purposes, mainly because there's no place else where I can practically do these things. My office is not only my work and testing space, my writing haven, my exercise room, my craft room, and my recording studio; in a pinch, or when a late-night or early-morning call or work assignment needs a space, it doubles as a meeting room, a lounge, a reading area, and occasionally a guest bedroom (and likewise, on occasion, "my" bedroom, when I find it late and I just need a couple hours of rest before I get up and go at it again).
The biggest challenge with a multi-purpose room is the need to switch tasks, and to have a system where I can do so effectively. "A place for everything and everything in its place" is great in theory, but very often I find myself having to contend with two or three projects needing the same space. Thus, what often happens is a bundling up of stuff, stuffing it wherever I can make room, and then getting back to it when I can. It's a room that invites fiddling, tweaking, and moving stuff around to find that "best spot" to put everything. In short, it's a place where, very often, I find that I get in my own way; my best-laid plans for one project/process cause me to be horribly inefficient for another.
In an ideal world, I would just say "OK, well, I will set up another room to do that other thing", but that's not really an option. To keep a happy home, this room is my domain for whatever project I need to be working on, and as such, it has to work, odd and ends and all.
I liken this to my testing career. Too often, people on the outside look at testing as though it's a neat little atomic and simple process. You look at something, you expect a behavior, you confirm the behavior, it passes. If it doesn't confirm the behavior, it fails. That limits testing to a very simple yes-or-no system that is efficient, elegant, and easy to organize. I hate to be the bearer of bad news, but testing is none of those things. It's wildly cross-discipline, it takes from many places, and it needs many unique resources, some of which are really hard to categorize.
Testers don't just have a small set of tools at their disposal. In fact, they literally have the entire world of knowledge to work with to help define their tests, their strategy and their approach. When you picture a heavily cluttered and wildly akimbo lab of a mad scientist, that is the essence of a tester, and the way a tester often works.
Does this mean that we cannot be organized, that we cannot be efficient? Of course not, but it does mean that we run the risk of getting in our own way. We are often enticed by some new tool, app or device that will make things simpler, more elegant, less crowded, more organized and methodical. Unfortunately, I tend to look at testers (not all, mind you) as those people that fall under the "ten percent chaos rule". We have needs for more than the "nicely organized" environment that fits one purpose very well. We shift, we switch, we weave and bob through ideas and applications, and sometimes, that requires a tolerance for "stuff and clutter" that goes beyond what many think is comfortable.
As of now, I have some semblance of control over my domain, but alas, this is a good day. Some days are more chaotic than others. In my world, the best thing I have discovered is that "nothing stays the same, and nothing ever changes unless there's some pain involved". That doesn't mean that I don't keep trying to organize and get everything where I want it to be. It means that I do my best to rid myself of distractions until I actually need them, and then know where those distractions have gone. Once I know where they are, I try my mightiest to forget they are there, at least for the time being. Ultimately, I have to be aware of the fact that, most of the time, the person that gets in the way of my progressing on something, ultimately, is me. Thus, it is best to do whatever it takes to get me out of my own way whenever possible :).
Friday, December 13, 2013
Let's Stop Faking It: My "Other" Talk from Øredev 2013
I realized this morning, as I went back to the Øredev 2013 site, that the video for my second talk had been posted. My talk about Balancing ATDD, GUI Automation and Exploratory Testing has been getting most of the press, as well as repeat performances for the Bay Area Software Testers and Silicon Valley Software Quality Association meetups. Still, I want to talk about this one, as it really draws on a lot of things I've been thinking about, as well as a moment of embarrassment and realization during the Q&A.
This talk was all about expertise, and how we all deal with it and represent ourselves around it. I draw a lot on my own experiences and how I like to deal with ways to get beyond faking what I know and moving forward into real knowledge and skill. For those who want to see the actual Prezi presentation up close and personal, it's here.
The embarrassing moment? In the Q&A, I was asked how I felt about "Impostor Syndrome". I totally botched the definition, giving an answer that was the exact opposite of what Impostor Syndrome actually is. Maybe it was a combination of lack of sleep and the adrenaline rushing through me, but I worked through the answer and moved on to other participants. As I looked back at the person who had asked about Impostor Syndrome, though, I could tell by the look on their face that I hadn't answered their question.
Fortunately, one of the participants clarified for me what I had messed up. Impostor Syndrome is where you are competent but believe you are not. I felt tremendously relieved to hear this, as it gave me a chance to reconsider, reframe, and add to my understanding. Of course, some are going to ask, "How could you give a talk about faking it and not have a good grasp of Impostor Syndrome?" It's a fair question. I didn't prepare my talk from the perspective of the psychological reasons why people fake it; I approached it from my own memories and from working with other people I directly knew. "Impostor Syndrome" was a term that, until that very moment, I'd never actually heard. I mentally walked through what I figured it would be, and answered. Were I perhaps better rested and less amped at that immediate moment, I might have asked, "What do you consider good examples of Impostor Syndrome?" so I could see the questioner's context and get an understanding of what they meant by the term. It's what I should have done, but didn't.
So why do I call that moment great? It gave me a very public opportunity to come clean, to not be evasive, to actually address head-on something I didn't know but "pretended" to (ironic considering the topic, huh ;) ?). It let me live my creed, and allowed that to be the lingering memory, rather than having no one call me on it, only to have several people walking around afterwards saying "wow, what a hypocrite, he just totally faked his way through that answer!"
It was a terrific reminder, and one I won't soon forget.
Wednesday, December 4, 2013
Live from Climate Corp, it's BAST!!!
Another day, another Meetup, and this time I don't have to present, so I can do what I usually do at these events, which is stream whatever flows into my brain and capture it in bits to share with the rest of you.
Again, as in all of these events, this will be raw, disjointed, and potentially confusing, but the benefit is that you get to hear it here and now as I hear it (well, for the most part). If you want fully coherent, wait a while ;).
Jim started our talk tonight by describing how he was able to set up a variety of opportunities selling Agile methodologies to organizations (businesses, government, etc.), and how he came to realize that many of those methodologies were, well, problematic. While working through some of the issues, they opted to apply Lean principles and, in the process, developed a variety of methods around a "kanban" system ("kanban" being a Japanese term he glossed as "ticket" or "the card that moves"). Anyway, that's Jim, and that's what he said he wanted to get out of the way right now.
What he really wants to talk about is the fact that, if you work for a team or company that prides itself on or markets itself as a "highly productive team", it's very likely that you are working in the worst environment possible. Wait, what?! Why would that be the worst possible thing?
Part of the reason is that with that "high productivity" comes a lot of gamesmanship. It's also incredibly subjective; productive according to whom? Do they mean the team as a whole? Do they mean the development methodology? Do they mean how much they push out? Who is defining or describing the "productivity"? We love to believe that everyone is on the same page at the same time, and that everyone is working in tandem.
Anyone who is in testing knows that this is rarely the case, unless you are fortunate to have the opportunity to pair with the developers as they code and you are riding along as a testing navigator (and yes, I do that from time to time, but not always, and not nearly as often as I would like to). More times than not, we get our stories dropped in a group at the end of the coding time, and testing then spins up and frantically tries to get the testing done.
Jim makes the point that we are seeing an increase of productivity in some aspects, but we are seeing a proportional decrease in actual effectiveness, because much is getting done, but little is being accomplished (or it's overloading the downstream parts of the process, i.e. those of us in testing or ops).
So how can we solve this? First, recognize that productivity silos exist, and that they are evil. No matter how much functionality is sandwiched into one role, and regardless of how productive that role is, it is not going to increase the entire team's ability to produce, release, or deploy, because while one group is hyper-optimized, other groups are woefully under-prepared and over-burdened; they do not have a complementary option. Think of trying to fit the flow of a twelve-foot-diameter water pipe through a connecting pipe that is only three feet in diameter. It doesn't matter how much you put into the twelve-foot pipe; the three-foot pipe is going to be a bottleneck.
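To put a rough number on that pipe analogy (the arithmetic is my own addition, not something from the talk): flow capacity scales with cross-sectional area, which grows with the square of the diameter, so the mismatch is much worse than the four-to-one ratio of the diameters suggests.

```python
import math

def pipe_area(diameter_ft):
    """Cross-sectional area in square feet; flow capacity scales with area."""
    return math.pi * (diameter_ft / 2) ** 2

# Area goes with the square of the diameter, so a 4x difference in
# diameter means a 16x difference in how much the pipes can carry.
mismatch = pipe_area(12) / pipe_area(3)
print(round(mismatch))  # 16
```

In other words, hyper-optimizing the upstream "pipe" doesn't just leave the downstream group a little behind; the gap compounds.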
Think DevOps... and before anyone thinks of "DevOps" as a team, it's not. It's a mindset and an approach. The goal, though, is that all of the teams need to be able to get the optimization that the programming group has. For the effectiveness of such a team to get better, all of the connection points need to be addressed, and all of the players need to be on the same page. That could mean many things. Sometimes it means that some of the programmers are going to be up in the middle of the night when something goes wrong ;). Silos are easy to talk about, but very hard to optimize and balance in reality.
"Productivity is just doing lots of stuff". Actually, Jim used a more colorful metaphor, but you get the point ;). Bad productivity is a reality. Lots of stuff is getting done, but is it really worthwhile? Is the chase for the almighty "velocity" really a worthwhile goal? Are we actually adding to the value of what we are creating? Or are we creating technical, intellectual, and expectational debt?
One of the goals behind Personal Kanban is to make work a pull system, where we grab what we can work on as soon as it's available. There are a variety of impulses that drive this. Demand pull, of course, is the market telling us "hey, we want this, please make it for us". Internal pull is when the internal voices of our companies are saying "we need this, and fast" without any correlation to what our customers want. Aim for the former, and let's do all we can to resist the latter.
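A pull system is easy to sketch in code, and the sketch below is exactly that: my own toy illustration (the class and task names are invented, not from the talk). The key move is that work only enters the "doing" column when there is capacity under a work-in-progress (WIP) limit; nothing is ever pushed onto the person doing the work.

```python
from collections import deque

class PullBoard:
    """A toy pull-system sketch: tasks wait in a backlog and are
    pulled into 'doing' only when there is capacity under a WIP limit."""
    def __init__(self, wip_limit):
        self.backlog = deque()
        self.doing = []
        self.done = []
        self.wip_limit = wip_limit

    def add(self, task):
        self.backlog.append(task)

    def pull(self):
        # Only pull when under the WIP limit; otherwise finish something first.
        if len(self.doing) < self.wip_limit and self.backlog:
            self.doing.append(self.backlog.popleft())
            return self.doing[-1]
        return None

    def finish(self, task):
        self.doing.remove(task)
        self.done.append(task)

board = PullBoard(wip_limit=2)
for t in ["story A", "story B", "story C"]:
    board.add(t)
board.pull(); board.pull()
print(board.pull())  # None -- at the WIP limit, nothing more is pulled
```

The refusal to pull past the limit is the whole point: the pressure of "we need this, and fast" shows up as a growing backlog you can see, rather than as invisible overload on the people doing the work.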
One of the real challenges of what we produce in a software centric world is that our "product" is extremely ephemeral. What we produce is difficult to visualize. Because of that, we have to be mindful of exactly what we are making, how that stream is created, and what has to happen from start to finish. The tricky part is that, more times than not, the initial value stream starts from the end and works its way backwards. What results, very often, is that we are missing stuff, and we don't know what we are missing, or why. If we are focused on productivity, we do not have any incentive to seek out the holes that exist. To be effective, we need to be aware, as much as possible, where the deep holes might actually be, and find them. Quickly is best :).
One of the concepts that we bring with Kanban, and that comes from the world of manufacturing, is the idea of "inventory". The goal is to have the right amount of materials on hand, but to not have too much inventory. Think of an auto manufacturer that produces tens of thousands of cars, only to discover that the market has no use for their cars, and therefore, they have produced tens of thousands of cars that no one wants. We'd see that as suicidal. Well, that works with code, too. When we write code that is bloated, filled with features that no one really wants, we are doing the same thing. We don't want to write a lot of code, we want to write the right code, an make sure that the right code WORKS!!! Note: this is not an Agile thing, or a Lean thing, or a Waterfall thing. It's a human thing, and we all feel its effects.
OK, so we have your attention. Productivity bad, effective good. That's great, but how can we use this in our testing? How can we recalibrate ourselves to a mindset of effectiveness? The most difficult thing is that, when we find that we have a back log of done things, rather than pushing those downstream to work harder and process more, we need to actually stop doing things and examine issues that might be "done" and see if "indeed" it really is (think of a story delivered two weeks ago, but that testing just got to today, and in the process, they found a bug. What does a programmer do? Well, very likely, they have to go back two weeks in memory to think about what they actually did, and that context switch is expensive. Compare that to programming happening and testing happening in a tight feedback loop. Which approach do you think will be more effective? Why?
Chances are, because if we can address something shortly after we worked on it, we would be fresh and remember where we were, what we were doing, and how we might be able to address it. Delays make that process much harder. Therefore, anything that creates a delay between Done in one sphere and Done in another will cause, surprise, more delays! the best thing we can do is build a narrative of our work that is shared and that is coordinated. If we realize that we have ten stories in backlog, the best thing we could do is stop adding to the back log.
Jim brought up the Buddhist concept of "mindfulness", and the ability to be mindful of the shared story comes to the fore when we are not overloaded and focused on production at the expense of everything else. Avoid becoming task focused, aim to be situationally aware. Specifically look for opportunities to collaborate. That doesn't necessarily mean "pair programming" (that might be a better be described as "teaming"), but it does mean try to find ways that we can leverage what each other is doing, not just to get stuff done, but to more likely ensure that we are doing the right things at the right time, in the most effective way.
One very special aspect that needs to be addressed... there are things that are just plain "hard". Very often, the way we deal with hard problems is we move over to easier things to work on, and when we get them done, we feel accomplished. It's human nature, but it doesn't address the fact that the hard stuff is not getting dealt with. the dancer is that, at the end of the sprint/iteration/cycle/etc. all that's left is the hard stuff, and now we have a danger of not being able to deliver because we didn't address the hard stuff until we reached a point of no return. when we get to things that are complex, it doesn't necessarily mean that we have failed. It may just mean that we need more than one person to accomplish the task, or that we need to have a more thorough understanding of what the actual goal is (it may be vital, or it may not even be necessary, but we will not know until we dig into it, and the longer we wait to make that dig, the less likely we will get a clear view of where on the spectrum that "hard problem" lies.
Back to testers in this environment. How can we help move towards effectiveness? Put simply, get us involved earlier, and have us test as soon as humanly possible. We don't have to test on finished code. we can test on requirements, on story points, on initial builds to check proof of concept. Sure, it's not finished, it's not elegant, but we don't really care. We don't want to polish, we want to check for structural integrity. Better to do that as early as humanly possible. We can focus on polish later, but if we can help discover structural integrity issues early, we can fix those before they become much mode difficult (and time sensitive) later.
---
My thanks to Jim and Tonianne for coming out tonight to speak with us, and thanks for helping us cover something a bit different than normal. I enjoyed this session a great deal, and I hope we can continue to deliver unique and interesting presentations at BAST Meetups. My thanks also to Curtis and Climate Corporation for the space and all the leg work for making this happen. Most of all, thanks to everyone who came out to hear tonight's talk. You are making the case for us that there is definitely a need for this type of information and interaction. Here's wishing everyone a Happy Hanukkah, Wonderful Winter Solstice, Merry Christmas, Happy New Year an any other holiday I might be overlooking, but we wish you the best for the rest of this crazy active month. We look forward to seeing you all again sometime in mid January, topic to be determined ;).
So for those curious, here's where we are at tonight, and what we are covering:
"Productivity Sucks, How About Being Effective" an evening with Jim Benson
Wednesday, December 4, 2013
6:00 PM - 8:30 PM
The Climate Corporation
201 3rd Street #1100, San Francisco, CA
Jim Benson, the co-author of "Personal Kanban", and a contributing author of "Beyond Agile: Tales of Continuous Improvement", is here to talk about the myths surrounding our work and how we think of it, specifically around how we determine what is productive, and what isn't. Tonianne DeMaria Barry, the co-author of "Personal Kanban" (and his partner at Modus Cooperandi) will also be sharing some of her experiences from a variety of successful "kaizen" camps that have been held around the world.
What we are hoping to do with tonight's talk (and several more in the future) is to expand the range of topics that get covered in a typical software testing Meetup. Our goal is to help develop a broad cross section of skills for testers, not just those in the nuts and bolts of direct and specific testing skills, or programming/toolsmith topics (nothing wrong with those, of course, we have them, too).
At the moment, though, we are eating food, drinking beer, wine and soft drinks, and conversing. Thus, I feel it vital to schmooze and welcome our guests, but I will be back shortly ;).
---
Jim started our talk tonight by describing how he was able to set up a variety of opportunities selling Agile Methodologies to organizations (businesses, government, etc.), and how he came to realize that many of the Agile Methodologies were, well, problematic. While working through some of the issues, they opted to apply Lean principles and, in the process, developed a variety of methods around a "kanban" system ("kanban" being a Japanese term that means "ticket" or "the card that moves"). Anyway, that's Jim, and that's what he said he wanted to get out of the way right now.
What he really wants to talk about is the fact that, if you work for a team or company that prides itself or markets itself as a "highly productive team", it's very likely that you are working in the worst environment possible. Wait, what?! Why would that be the worst possible thing?
Part of the reason is that with that "high productivity" comes a lot of gamesmanship. It's also incredibly subjective; productive according to whom? Do they mean the team as a whole? Do they mean the development methodology? Do they mean how much they push out? Who is defining or describing the "productivity"? We love to believe that everyone is on the same page at the same time, and everyone is working in tandem.
Anyone who is in testing knows that this is rarely the case, unless you are fortunate to have the opportunity to pair with the developers as they code and you are riding along as a testing navigator (and yes, I do that from time to time, but not always, and not nearly as often as I would like to). More times than not, we get our stories dropped in a group at the end of the coding time, and testing then spins up and frantically tries to get the testing done.
Jim makes the point that we are seeing an increase of productivity in some aspects, but we are seeing a proportional decrease in actual effectiveness, because much is getting done, but little is being accomplished (or it's overloading the downstream parts of the process, i.e. those of us in testing or ops).
So how can we solve this? First, recognize that productivity silos exist, and that they are evil. No matter how much functionality is sandwiched into one role, and regardless of how productive that role is, it is not going to increase the entire team's ability to produce, release, or deploy, because while one group is hyper-optimized, other groups are woefully under-prepared and over-burdened; they do not have a complementary option. Think of trying to fit a 12 foot diameter water pipe and its flow through a connecting pipe that is only three feet in diameter. It doesn't matter how much you put into the 12 foot pipe, the three foot pipe is going to be a bottleneck.
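Jim's pipe analogy can be put into a toy calculation: a pipeline's deliverable throughput is capped by its narrowest stage, no matter how fast the others run. This is my sketch, not something from the talk; the stage names and numbers are purely illustrative.

```python
# Toy model of the pipe analogy: overall throughput is the minimum
# of the stage capacities. Names and numbers are illustrative.

def pipeline_throughput(stage_capacities):
    """Items per day the whole pipeline can actually deliver."""
    return min(stage_capacities.values())

stages = {
    "develop": 12,  # the hyper-optimized group
    "test": 3,      # the under-resourced connecting pipe
    "deploy": 5,
}

print(pipeline_throughput(stages))  # -> 3: testing is the bottleneck

# Speeding up development alone changes nothing downstream:
stages["develop"] = 24
print(pipeline_throughput(stages))  # -> still 3
```

The point of the sketch is that "optimizing" the widest pipe only piles up inventory in front of the narrow one; only widening the bottleneck itself raises what the team delivers.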
Think DevOps... and before anyone thinks of "DevOps" as a team, it's not. It's a mindset and an approach. The goal, though, is that all of the teams need to be able to get the optimization that the programming group has. For the effectiveness of such a team to get better, all of the connection points need to be addressed, and all of the players need to be on the same page. That could mean many things. Sometimes it means that some of the programmers are going to be up in the middle of the night when something goes wrong ;). Silos are easy to talk about, but very hard to optimize and balance in reality.
"Productivity is just doing lots of stuff". Actually, Jim used a more colorful metaphor, but you get the point ;). Bad productivity is a reality. Lots of stuff is getting done, but is it really worthwhile? Is the chase for the almighty "velocity" really a worthwhile goal? Are we actually adding to the value of what we are creating? Or are we creating technical, intellectual, and expectational debt?
One of the goals behind Personal Kanban is to make it a pull system, where we grab what we can work on as soon as it's available. There are a variety of impulses that drive this. Demand pull, of course, is the market telling us "hey, we want this, please make it for us". Internal pull is when the internal voices of our companies are saying "we need this and fast" without any correlation to what our customers want. Aim for the former, and let's do all we can to resist the latter.
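The pull mechanic is simple enough to sketch in a few lines: work is pulled into "doing" only when there is capacity, rather than pushed onto us regardless of load. This is my own minimal illustration of the idea, not code from Personal Kanban; the class and task names are assumptions.

```python
# Minimal sketch of a pull system with a WIP (work-in-process) limit,
# in the spirit of Personal Kanban. All names are illustrative.

class KanbanBoard:
    def __init__(self, wip_limit):
        self.ready = []        # options we *could* work on
        self.doing = []        # work in progress, capped by wip_limit
        self.done = []
        self.wip_limit = wip_limit

    def add_option(self, task):
        self.ready.append(task)

    def pull(self):
        """Pull the next task only when there is capacity."""
        if self.ready and len(self.doing) < self.wip_limit:
            self.doing.append(self.ready.pop(0))
            return True
        return False  # at capacity: finish something before pulling more

    def finish(self, task):
        self.doing.remove(task)
        self.done.append(task)

board = KanbanBoard(wip_limit=2)
for t in ["story A", "story B", "story C"]:
    board.add_option(t)

board.pull()
board.pull()
print(board.pull())    # -> False: WIP limit reached, no more pulling
board.finish("story A")
print(board.pull())    # -> True: capacity freed, pull resumes
```

The WIP limit is what turns a to-do list into a pull system: new work enters only when old work actually leaves.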
One of the real challenges of what we produce in a software centric world is that our "product" is extremely ephemeral. What we produce is difficult to visualize. Because of that, we have to be mindful of exactly what we are making, how that stream is created, and what has to happen from start to finish. The tricky part is that, more times than not, the initial value stream starts from the end and works its way backwards. What results, very often, is that we are missing stuff, and we don't know what we are missing, or why. If we are focused on productivity, we do not have any incentive to seek out the holes that exist. To be effective, we need to be aware, as much as possible, where the deep holes might actually be, and find them. Quickly is best :).
One of the concepts that we bring with Kanban, and that comes from the world of manufacturing, is the idea of "inventory". The goal is to have the right amount of materials on hand, but to not have too much inventory. Think of an auto manufacturer that produces tens of thousands of cars, only to discover that the market has no use for their cars, and therefore, they have produced tens of thousands of cars that no one wants. We'd see that as suicidal. Well, that works with code, too. When we write code that is bloated, filled with features that no one really wants, we are doing the same thing. We don't want to write a lot of code, we want to write the right code, and make sure that the right code WORKS!!! Note: this is not an Agile thing, or a Lean thing, or a Waterfall thing. It's a human thing, and we all feel its effects.
OK, so we have your attention. Productivity bad, effective good. That's great, but how can we use this in our testing? How can we recalibrate ourselves to a mindset of effectiveness? The most difficult thing is that, when we find that we have a backlog of done things, rather than pushing those downstream to work harder and process more, we need to actually stop doing things and examine issues that might be "done" and see if, indeed, they really are. Think of a story delivered two weeks ago, but that testing just got to today, and in the process, they found a bug. What does a programmer do? Well, very likely, they have to go back two weeks in memory to think about what they actually did, and that context switch is expensive. Compare that to programming happening and testing happening in a tight feedback loop. Which approach do you think will be more effective? Why?
Chances are, if we can address something shortly after we worked on it, we will be fresh and remember where we were, what we were doing, and how we might be able to address it. Delays make that process much harder. Therefore, anything that creates a delay between Done in one sphere and Done in another will cause, surprise, more delays! The best thing we can do is build a narrative of our work that is shared and coordinated. If we realize that we have ten stories in the backlog, the best thing we could do is stop adding to the backlog.
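There is standard Lean arithmetic behind the "ten stories in the backlog" point, usually stated as Little's Law: average lead time equals work in process divided by throughput. Jim didn't present this formula tonight; it's my framing, and the numbers below are illustrative.

```python
# Little's Law: average lead time = work in process / throughput.
# Numbers are illustrative, not from the talk.

def lead_time_days(items_in_process, throughput_per_day):
    """Average wait before an item (and its feedback) comes out the end."""
    return items_in_process / throughput_per_day

# A team finishing 2 stories a day with 10 stories queued:
print(lead_time_days(10, 2))   # -> 5.0 days before feedback arrives

# Adding work instead of finishing it only stretches the loop:
print(lead_time_days(14, 2))   # -> 7.0 days
```

Every item added to the queue lengthens the delay for everything already in it, which is exactly why "stop adding to the backlog" shortens the feedback loop.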
Jim brought up the Buddhist concept of "mindfulness", and the ability to be mindful of the shared story comes to the fore when we are not overloaded and focused on production at the expense of everything else. Avoid becoming task focused, aim to be situationally aware. Specifically look for opportunities to collaborate. That doesn't necessarily mean "pair programming" (that might be better described as "teaming"), but it does mean trying to find ways that we can leverage what each other is doing, not just to get stuff done, but to better ensure that we are doing the right things at the right time, in the most effective way.
One very special aspect that needs to be addressed... there are things that are just plain "hard". Very often, the way we deal with hard problems is we move over to easier things to work on, and when we get them done, we feel accomplished. It's human nature, but it doesn't address the fact that the hard stuff is not getting dealt with. The danger is that, at the end of the sprint/iteration/cycle/etc., all that's left is the hard stuff, and now we risk not being able to deliver because we didn't address the hard stuff until we reached a point of no return. When we get to things that are complex, it doesn't necessarily mean that we have failed. It may just mean that we need more than one person to accomplish the task, or that we need a more thorough understanding of what the actual goal is (it may be vital, or it may not even be necessary, but we will not know until we dig into it, and the longer we wait to make that dig, the less likely we will get a clear view of where on the spectrum that "hard problem" lies).
Back to testers in this environment. How can we help move towards effectiveness? Put simply, get us involved earlier, and have us test as soon as humanly possible. We don't have to test on finished code. We can test on requirements, on story points, on initial builds to check proof of concept. Sure, it's not finished, it's not elegant, but we don't really care. We don't want to polish, we want to check for structural integrity. Better to do that as early as humanly possible. We can focus on polish later, but if we can help discover structural integrity issues early, we can fix those before they become much more difficult (and time sensitive) later.
---
My thanks to Jim and Tonianne for coming out tonight to speak with us, and thanks for helping us cover something a bit different than normal. I enjoyed this session a great deal, and I hope we can continue to deliver unique and interesting presentations at BAST Meetups. My thanks also to Curtis and Climate Corporation for the space and all the leg work for making this happen. Most of all, thanks to everyone who came out to hear tonight's talk. You are making the case for us that there is definitely a need for this type of information and interaction. Here's wishing everyone a Happy Hanukkah, Wonderful Winter Solstice, Merry Christmas, Happy New Year, and any other holiday I might be overlooking; we wish you the best for the rest of this crazy active month. We look forward to seeing you all again sometime in mid January, topic to be determined ;).
Monday, December 2, 2013
A Survey on “The State of Testing”
So let me set the stage here….
Joel Montvelisky (http://qablog.practitest.com) wanted to write a post about the advances that have taken place in the tester-verse in the last 5-10 years. While he was trying to put this post together, he came to the conclusion that there really isn’t a centralized set of information or trends as to what is happening in the testing world.
What does a tester do when they can’t find that information? They take it upon themselves to make their own. Trust me, I understand this logic completely ;).
Thus, Joel reached out to Lalit Bhamare (who edits Tea Time with Testers) to create and conduct a "State of Testing" survey. The purpose? To provide a “snapshot” each year of what the “reality” of the testing world is, and see if we can follow various trends as they shift year by year.
Of course, for a survey like this to be effective, a lot of people need to participate. To get a lot of people, exposure is necessary. To get that exposure, it would help if other testers would participate and spread the word.
Therefore, I feel it my duty and privilege to be part of drawing attention to this initiative, and I’m going to ask all of my tester friends out there to do likewise and take part in this survey.
Great, dude, I’m in… what survey?
THIS ONE!
Joel is planning on posting the survey (you can be added to the email notification list via the link above) by Thursday or Friday, December 4th or 5th. It will be up for about ten days.
So what am I asking?
1. Go to the link and subscribe so that they can contact you when the survey goes live.
2. Participate in the survey and give your honest feedback.
3. Make a point to tell as many testers as you can to likewise participate in the survey.
Tired of others telling you what testing is like and what the issues surrounding it are? Would you like to contribute in this endeavor and make your voice heard? Then by all means, please do, and tell other testers about it and get them to participate. Knowledge is power, after all :).