A couple of weeks ago, I had the pleasure of going up to Calgary, Alberta to facilitate the Calgary Perspectives On Software Testing (POST) workshop. Our topic was "Software Testing as a Service". My job? Keep the conversation rolling and give everyone who wanted to participate a chance to do so. The presenters were given 20 minutes to present an idea, and the conversations that followed went as long as people wanted to let them go (a hallmark of this style of facilitated workshop).
Out of all the presentations, none generated as much conversation and discussion as the talk that Marianne Murray gave, titled "Testing as a Service and the Introvert". We riffed on this topic for almost two hours, and there was plenty more to go by the time we decided we should give other attendees a chance to present their talks.
Why does this topic resonate with so many people? I think in some ways, it comes down to the fact that we have been taught our whole lives that there are two kinds of people: Introverts and Extroverts. Introverts are the quiet, shy ones that sit in the corner and would rather read a book than talk to people. Extroverts are lively party animals that are the center of attention. You're either one or the other. The problem is, human beings are not quite that easy to pigeonhole. More to the point, there's a continuum of experiences, and I believe that people shift over time, or that specific stimuli and learned behaviors help to shape the perceived image. Why do I hold this opinion? Because I can point to periods of my life when I would have been painted as a total introvert (my elementary and middle school years) and a total extrovert (my years as a musician singing in a glam rock band).
Looking at those two periods side by side, we have to ask: what was the difference? Did I magically change from an introvert to an extrovert? Is it something that can be easily switched on or off? Or is it that I learned behaviors I realized were advantageous, and somehow "overcame" my more natural introverted tendencies? Or, really, did I ever actually overcome those introverted tendencies at all? And what does this all have to do with testing anyway?
Marianne boiled it down to a basic tendency, and I thought this insight was actually quite telling. To get at the most basic and fundamental aspect of what it means to be an introvert or an extrovert, she used a simple picture of two heads. Each head received an input (think into the eyes or ears), processed it, and generated output (think out of the mouth or through physical actions). Marianne argues that the true difference between the extrovert and the introvert is the processing time. For the extreme extrovert, the processing time and function is minimal; input received, output expressed, with little to no perceivable processing. The introvert, on the other hand, was represented by a much more circular feedback process; the input would come in, and the information would be processed in a much longer loop before the resulting output was seen (speaking or action).
In my experience, I think this is a really good representation of the difference, in that it gets to the heart of what we do. Still, I cannot help but think that, even with this approach, we are missing something when we describe someone as an introvert or an extrovert, and I think, personally, that something is "experience".
For me, when I was younger, I often found myself at a loss for words or unable to speak to certain things with others. It's not because I was stupid or uneducated (relatively speaking) but because I hadn't had a lot of experience talking about those things. Add to that the fact that I had a tremendously short attention span (still do, though now I realize that certain "fixations" can monopolize the process) and what you had was someone who would internalize a lot of things, mull them over, and then act only after having had a chance to consider everything. In short, I would have been defined in those years as a classic introvert. Medications such as Methylphenidate Hydrochloride, which I took from 3rd grade through about 7th grade, and then off and on again through high school, tended to make that "classic introversion" more pronounced.
At the same time, I loved performing in plays, singing on stage and doing other things that are more commonly associated with people who are "extroverts". At first, I did get anxious about going on stage. Of course I had butterflies in the stomach, mild anxiety about doing a good job, or any number of other things that would impact how I might perform or how I might respond to an audience. Over time, and with practice, I grew more comfortable with those nerves, and I grew to adapt and take part in a more "social" and gregarious approach to those activities. Did I magically become an extrovert? I don't think so. Instead, I learned how to perform to "extrovert" expectations, and the experiences I had allowed me to shorten the feedback loop necessary to respond.
As we approach software testing, we sometimes think that personality shapes how we think and how we interact. We believe that Introverts will be more detail oriented. We believe that Extroverts will be able to sell the process better. We feel that if we can align the right people with the right roles based on their tendencies, we will be successful. The problem is, the whole introvert vs. extrovert dichotomy is somewhat false. Without really getting into the minds and the histories of the people we are interacting with, we don't really know if they are introverts, extroverts, or any other "erts" we might want to apply. How much of it is nature? How much of it is learned? How much of it is an act? If you were to meet and talk with me, depending on the day you meet me, you might think I'm incredibly quiet and non-engaging, or you might think I never shut up! You might label me an introvert on one day, and an extrovert on another. The truth is, I don't even know where one ends and the other begins.
I do know that my emotions, my personality and my approach to situations come in waves. It's not like the high wave is extroversion, or the low ebb is introversion. Sometimes when I'm really on my game, I hide from everyone to immerse myself in it and get the most out of it I can. Sometimes when I'm at super low energy, all I want to do is talk and discuss ideas, perhaps to draw out my own, because I don't feel them making sense to me at that moment. It varies, and on any given day, or any given hour, it may manifest itself very differently. For me, while I think it's important to figure out where on the continuum you fall, you also need to know that where you fall may swing like a pendulum, and so might the people you work with each day. Take the time to understand that, and it's much more likely you will be attuned to what they need long term, not just at the moment you are interacting with them.
Saturday, March 31, 2012
Thursday, March 29, 2012
RESTful Bundling - An #sfrails meetup followup
It's been a while since I've been able to get out to a San Francisco Rails Group meetup, so I'm happy to be out tonight at Plum District, the company that is hosting what looks to be a very full, standing-room-only group (glad I got here a little early :) ). Not to be outdone, we even had an emergency alarm system go off during introductions. Talk about a warm welcome!
Tonight's two talks were focused around Bundler and developing RESTful clients. Andre Arko (@indirect) led off the evening by discussing Bundler and where it came from, why it's here and where it is going. For those familiar with Ruby (and those who are not), libraries and compiled code modules are made available through an interface referred to as Ruby "gems". Trying to keep track of all of the gems a project needs, making sure that exceptions don't derail your progress, and figuring out which gems your code actually uses can quickly become a real headache.
Before Bundler, there were various ways to try to solve this problem, some of which were quite painful and frustrating (lots of "gem install" in script files, etc.). Bundler solves this problem by making sure that gems and their ordering are handled automatically, so that you can be sure you are running what you think you are running.
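To make that concrete, here is a bare-bones example of the kind of Gemfile Bundler works from; the gems and versions below are placeholders I picked purely for illustration, not anything from the talk:

```ruby
# Gemfile -- Bundler reads this to resolve and install a consistent set of gems.
source 'https://rubygems.org'

gem 'rails', '~> 3.2'      # the framework the app is built on
gem 'nokogiri'             # an example runtime dependency

group :test do
  gem 'rspec-rails'        # only needed when running the test suite
end
```

Run "bundle install" once and the resolved versions get locked into Gemfile.lock, so everyone on the project (and every deploy) runs the same set of gems.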
Today, Andre talked about Bundler 1.1 and some of the new features it has, such as GitHub-specific gem sources in the Gemfile (kinda cool :) ). You can now easily check to see if your gems are outdated. You can clean up old gems and remove the ones you are no longer using. Bundler 1.1 now supports subshells without Bundler, and Bundler now shows the absolute path to every gem in your bundle if you want to search for them. The big question, though, is "IS IT FASTER?!" The answer is "yes", by about a factor of four, but don't take my word for it, download it, install it, and check it out for yourself :).
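For reference, the GitHub shorthand looks something like this; the gem and repository named here are just an example, and the shell commands are shown as comments:

```ruby
# Gemfile entry using the Bundler 1.1 GitHub shorthand, which expands
# to the full git URL for you:
gem 'rails', :github => 'rails/rails'

# From the shell (shown as comments for reference):
#   bundle outdated   # list gems with newer versions than your Gemfile.lock
#   bundle clean      # remove installed gems your bundle no longer references
```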
Bundler 1.1 is a lot better, and they want Bundler 1.2 to be even better than that. To do that, the goals are shorter release cycles, Ruby version checks, and checking out local gems if set up to do so. One of the big goals is to get rid of "bundle exec". Overall, this looks cool and I'm interested in seeing what we'll be offered down the road.
The second talk was focused on developing RESTful clients, and explaining how everything fits together. Jack Lawson from Plum District talked about some of the challenges he faced while implementing an API to build out a client, any client. These approaches work for websites, mobile apps, tablets, etc. I have to admit, much of this is still fairly esoteric to me, but since I've worked through Learn Ruby the Hard Way, a lot of these conversations make a lot more sense now. I still have some holes to fill in, but they feel much less massive than in previous meetups I've attended. Building the API is just the first half of the battle. Actually getting everything to play nice is the other half. Backbone, CoffeeScript, and Mustache / Hogan.js were demonstrated so that we could see the steps necessary to progressively grow the client application. Pretty cool all told, though still a bit over my head in spots.
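Since most of the client-side demo lived in Backbone and CoffeeScript, I won't try to reproduce it here, but the "consume the API" half of the idea looks roughly like this in plain Ruby; the endpoint, path and fields are hypothetical ones I made up for the sketch:

```ruby
# A rough sketch of a RESTful JSON client in plain Ruby; the endpoint,
# path, and fields are made up for illustration.
require 'net/http'
require 'uri'
require 'json'

def fetch_deals(base_url)
  uri = URI("#{base_url}/api/v1/deals.json")
  response = Net::HTTP.get_response(uri)
  raise "Request failed: #{response.code}" unless response.is_a?(Net::HTTPSuccess)
  JSON.parse(response.body)
end

# Example usage against a local Rails server:
# fetch_deals('http://localhost:3000').each { |deal| puts deal['title'] }
```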
I'd be remiss to not say a hearty thanks to everyone who puts on these Meetups at regular intervals. We comment often that you can't swing an umbrella in San Francisco without hitting some kind of meetup. There's a wealth of talent and opportunity here. I need to remember to not take such things for granted.
Wednesday, March 28, 2012
Procrastination Adrenaline
Hi everyone, my name is Michael, and I have a dirty secret.
I am addicted to "procrastination adrenaline".
Much of what I do with this site is discuss issues that many other testers are likely dealing with as well, so I feel compelled to say this. It's a big problem, but so many are afflicted with it, and it's time someone came out to talk about it.
So many of us say that we wish we could be more organized, to be better able to parcel out our time, to be more measured and deliberate, but so often, we fall back into our old habits. We know it's bad, we know it's stressful, we know that we could potentially do better work if we didn't do it. So what keeps drawing us back?
There are many answers, and when I was at the first day of STP-Con, I had a chance to talk to a bunch of people in various workshops. As we were talking about getting talks and presentations together, a lot of the people there were sighing and bashfully admitting that they did something similar. They were thinking about and mulling over ideas for weeks at a time, in many cases. Still, when it came time to build the item that needed to ship, for many, it was a just-in-time delivery, or a late night in the hotel, or an early-morning-just-before-the-presentation kind of deal. Note, these are not tardy school children. These are accomplished professionals with many years of experience. Yet here we were, commiserating on just how often we did this "by the seat of our pants" completion of items that absolutely have to be delivered at a non-negotiable time.
I've been thinking about why we do this, and I believe that for many of us, early in our development, we discovered that we got a rush out of near-panic situations. If we had done this and failed miserably, it's likely we would have modified our behavior and then been more consistent or planned ahead better. But I think that, for many of us, the opposite happened. It's possible that, with that surge of "procrastination adrenaline", we produced something really good, maybe even far beyond what we believed we could do, and we were praised for it. In a panic, we studied the night before for a big exam, and we aced it. The night before a paper deadline, we rapidly wrote a stream-of-consciousness essay, and we got an "A" for it. We made a presentation to a group when we didn't have a lot of time to prepare, and we killed it, getting high fives and handshakes all around for knocking it out of the park.
Sane people don't do things when they have proven to be ineffective. Over time they adapt and move towards the things that work. The scary thing about procrastination adrenaline addicts is that, for all intents and purposes, IT DOES WORK! This model does bring out some intense creativity. Maybe not every time. Maybe not with as much polish as something that was developed with measured and deliberate progress over time. The challenge is that the creative engine can produce items of intense brilliance in these periods, and the more frequently we are praised for doing so, the more likely we are to believe that the last minute heroics are exactly why we are creating such "mega brilliance".
How do we break ourselves of this addiction? It's tough. I've tried creating artificial deadlines, but my brain knows better, so it's rarely effective. One thing that does help, though, is to have an accountability partner, someone you pledge to work with and deliver something to at specific milestones. These milestones should be hard and fast, and your accountability partner should be someone who will hold your feet to the fire. You may still procrastinate in smaller doses, but those doses are spread out over the life of a project or deliverable, so you have to deliver a little at a time. Another way to do it is to have a personal "daily stand up" like we do in Scrum meetings. Even if it's just yourself, take the time to say what you did yesterday, what you plan to do today, and identify any blockers in your path. It may sound weird, but talking out what you want to do and making yourself accountable to yourself can work. It's harder than having a second party willing to be an accountability partner, but it's doable.
For many of us, there may be no reason to break out of our procrastination adrenaline addiction at all; we should just take advantage of our weird brains and work with them. If that's the case, then Merlin Mann's Procrastination Dash, which is the (10+2)*5 approach to goal management (10 minutes of focus, a 2 minute break, repeated 5 times in an hour), may be exactly what's needed to bend the impulse to procrastinate at least somewhat in our favor.
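If you want to try it, the mechanics are simple enough to script. Here's a tiny Ruby sketch of the dash, nothing official, just the arithmetic spelled out:

```ruby
# A minimal sketch of the (10+2)*5 "procrastination dash":
# five rounds of 10 minutes of focus followed by a 2 minute break,
# which adds up to one hour.
FOCUS_MINUTES = 10
BREAK_MINUTES = 2
ROUNDS        = 5

ROUNDS.times do |round|
  puts "Round #{round + 1}: focus for #{FOCUS_MINUTES} minutes..."
  sleep FOCUS_MINUTES * 60
  puts "Take a #{BREAK_MINUTES} minute break."
  sleep BREAK_MINUTES * 60
end
puts "Dash complete: #{(FOCUS_MINUTES + BREAK_MINUTES) * ROUNDS} minutes."
```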
Regardless of where you fall in this, know that I understand your plight, and I battle with this issue regularly. Whether or not you wean off of it completely, or just try to find ways to make it work better for you, focus on making it work for you.
Tuesday, March 27, 2012
I’ve Got a Buddy Who…
I’m being a little cagey at the moment, but I won’t have to be for much longer.
I’ve been talking to a number of people over this past weekend while I was in New Orleans, and some of the conversation turned to the show “Pawn Stars”. I am not at all surprised that “Pawn Stars” is a popular program among software testers. For those of us who do a regular amount of testing, Pawn Stars is a great metaphor for a lot of what we do.
For those not familiar with the show, there’s a pawnshop in Las Vegas called the “Gold & Silver Pawn Shop”. It’s a family owned business and three generations of the family work there. Richard Harrison (The Old Man) is the owner of the shop. Rick Harrison, the elder Richard’s son, runs the shop’s day to day operations, and Rick’s son Corey works there and has hopes of running the store one day. The premise of the show is that people bring items into the pawnshop, and the Harrisons (and others in the store) try to determine what they are looking at and what the potential value of the item is. They then ask if the customer wants to pawn the item or wants to sell it. In the event of a pawn, a loan is made based on the value of the item. If it’s a sale, then they negotiate for purchase. One thing is made abundantly clear by all of the Harrisons. Their goal is to make money on every transaction.
One of the things that makes the show interesting is the fact that you may not have any idea what you are looking at. Is the item legitimate? Maybe it’s a counterfeit. Or perhaps it’s a really valuable item in some areas, but you just don’t know anything about it. Since Rick is the one that does a lot of these interactions on the air, he admits when something is in his scope of expertise and when something isn’t. When Rick doesn’t know how to move on something, he almost always says “I’ve got a buddy that is an expert in …[fillInTheBlank]”. This is the indication that Rick is calling in an expert to check out the item, discuss what they see, what their understanding is, and what the value of the item is, based on their education, experience and past dealings with items like it.
We as testers go through this every day. Each of the Harrisons has a different experience level. None of them knows everything. Sometimes Richard (the Old Man) has gut level instincts based on years of experience, but little to no knowledge of the current market. Sometimes he’s right, but sometimes he’s wrong. Often Corey sees opportunities that neither the Old Man nor his dad understand, and he can make a great deal, even when the item they are looking at may be beat up or even missing pieces. In my world view, I know quite a bit about functional testing and doing negative tests, but performance isn’t quite my bag (or at least not to the level of someone like Scott Barber). The point is, we all have our gut feelings, our skills and our learning, but we don’t all have the 100% complete package all the time. The good news is that I don’t have to, if I have enough buddies I can call on to get some expert opinions from time to time.
Coming up soon, there will be something where I can guarantee I will need to call in on some of those “I’ve got a buddy who…” situations. I’m hoping y’all are receptive and willing to help “close some deals” :).
Monday, March 26, 2012
Somewhat Live from #STPCon
Good morning everyone! Today is another experiment in live blogging, so please realize this may be messy, it may be chaotic, and it may take a while for it to make sense.
Today I'm reporting live from the Sheraton New Orleans Hotel. While I'm not going to be able to attend the entire conference, I had the opportunity to be here for the "pre-game show", which is where a number of participants put in some solid time (not to mention a lot of effort on the part of presenters) to put on workshops and tutorials for conference attendees.
We hear a lot about the track talks, keynotes and special events. Rarely do we get the opportunity to hear about the tutorials and workshops, and if we do, they are usually commented on by a single attendee several days later. My goal over the next several hours is to get in, examine the tutorials and workshops, and give you a feeling for each of them and the content they are covering.
Having just come off the POST workshop last week, I was curious to see what Lynn McKee and Nancy Kelln would have up their sleeves for today. Lynn and Nancy are, as some may know but many may not, almost always joined at the hip. You will rarely see one without the other, and this dynamic duo, who I refer to as the "Calgary Wonder Twins", are presenting today on the details that go on behind the scenes of any given testing project. The fact is, we often have great intentions, and we believe we can offer tremendous value. However, behind the scenes of each project are the moving parts that either put our testing performance in a great light or leave it completely hidden.
We struggle with expectations, with estimates that may or may not be accurate, and with the value (or lack thereof) of metrics, and in all cases, unless we are selling testing services, software testing does not make money for the organization. Save it? A great possibility. Safeguard it? Even more likely. Still, the simple fact is, we are an add-on to the essential business in many people's eyes. The code is what makes the money, but the quality of the code could have a tremendous impact on the ability of that code to actually make money (or reputation, or add value; remember, not everyone writes code for monetary reasons). Lynn and Nancy will spend the better part of today giving the attendees the chance to consider these areas and get their hands dirty with the stage work necessary. We'll check in with our stage crew a little later to see how they are doing :).
A couple of rooms over, Doug Hoffman has a full house covering the topic of Exploratory Test Automation. For many, automation has some great benefits, but it also has a number of disadvantages. Automation is great at static tasks. It's also great for doing repetitive set up and tear down changes. It can confirm, it can state if something exists or if it doesn't, it can give a log about what appears on the screen. Automation can see, but it can't feel anything. Automation doesn't have emotions, it doesn't have gut feelings, it can't detect a "code smell". That's up to humans that use a very specific oracle... the human brain. We suffer, however, from the fact that most automation doesn't give us the opportunity to inspect and consider a broad range of adaptable opportunities.
To do that, we need exploration skills, and the ability to ask probing questions based on what we learn at any given moment. Humans are remarkably unreliable, slow, and inconsistent, but we have the ability to make broad mental leaps to consider issues and challenges. Automation is very fast, very reliable, can be very consistent, but it's incredibly stupid. I mean stupid compared to the ability of the human brain to make deductions and consider new avenues. Exploratory Test Automation seeks to bridge that gap. There are limits to how effective this is, but there are processes that allow testers to do this in a limited capacity. We can create multiple paths to examine. We can set up multiple oracles in advance to utilize and examine lots of criteria. We can use a lot of random, pre-selected data in varying scenarios, and we can tie these pieces together to get a more complete picture than we can with a static, single-focus automated test. How do we effectively do this? Tune in a little later, and I will hopefully have some answers :).
I took a step back to see what Bob Galen was saying about Agile Testing and, more importantly, how to effectively lead Agile teams. Teams are the ones that design and determine whether or not a project can be effectively organized and run. Agile teams have the ability to shape projects rapidly, to create products and services that can be modified quickly and pivot to focus on changing requirements. In some environments, especially when hardware development or embedded systems have to be created, it's a stretch to apply some of these techniques, but they have value and can be applied. Bob opened the session by having everyone post questions and opportunities for the discussion on a whiteboard. The questions from the team are driving the initial discussion, so that they can focus on understanding the challenges for the group in attendance.
Some of the questions are unique to specific contexts, but many of the questions asked can span multiple disciplines, businesses and even development models. Whenever we get a group of testers together, we get a lot of "variations on a theme". We may think we have unique requirements, but in the big picture view, our needs are not so different from one another after all. What often changes are the specific details for the personality of the given team or product.
Anne-Marie Charrett was one of the first people that I had the opportunity to interact with during my very first AST BBST Foundations course a couple of years ago. Much of my initial feedback and extended commentary on my strengths and the areas I could potentially modify and improve came from her. I consider her a great representative of the community, and I was excited when I saw that her career development workshop was being offered at STPcon.
For those who are not familiar with Anne-Marie, she has an expertise in coaching testers. When we hear about coaching testers, we often consider that to be James Bach's or Michael Bolton's domain, but Anne-Marie is also a solid contributor to that area (she does private sessions with testers willing to work through challenges and get direct feedback, and she's super at this). For this session, Anne-Marie and Fiona have combined forces to give testers an opportunity to consider their career opportunities and where they might want to go with their careers. Sometimes we get bogged down in the details of our everyday lives, and over time, we get lost in the weeds. There's an old yarn that goes "meet the new boss... same as the old boss". Well, the same can be said for testers when they look for career changes or step into different companies. "It's totally different... just like the last testing gig I had".
The fact is, in today's work world, we have more opportunities to take our careers into our own hands. For millennia, there were barriers to being mobile and dynamic in career choices. If you wanted to farm, you had to have land. To be an artisan, you needed to have tools. To be a blacksmith, you had to have a forge. To create a product for many people, you needed a factory. The barrier to changing was that the means of production were expensive and frequently unportable. Today, with the confluence of the web, computing power and devices, anyone with a laptop and a smart phone has the ability to carry their "factory" anywhere. In this brave new and weird world, the ability to produce and create your product has never been closer. The upside is that we can be as dynamic and as portable as we would like to be. The downside is that we also have to manage those opportunities ourselves.
Anne-Marie and Fiona focused on different career areas to consider. Testers have the ability to be individual contributors (software testers), to lead teams (test managers), to take their options on the road (test consultants), or to help others develop their skills (coach and trainer). While there are many opportunities in just those areas, there are also many other avenues to consider and examine (writing is a cool opportunity, if I do say so myself). Matt Heusser just finished talking about making the move from individual contributor to roving test consultant, and the rugged realities one faces when making those changes. We've talked about this many times, but it's always interesting to see the perspectives that other people's questions help bring to that discussion. Don't you wish you were here to hear that ;)?
I decided to jump back to Lynn and Nancy to pick up and see where they were with regards to the behind-the-scenes details of testing projects. I came in on an exercise related to communication and discovering the explicit and hidden motives of teams. If I don't consider the fact that there are hidden motives, not only am I being somewhat naive, but I'm also missing a huge component of any project. When we act as advocates for testing teams and processes, we are much more likely to be able to communicate the benefits of our approaches and our methods if, instead of hoping we are addressing the up-front expectations, we look to the hidden agendas each group has. The core of this process is communication and getting feedback.
Note, we can't just say we want feedback and blissfully stand around thinking it isn't going to come. Oh believe me, we'll get it if we really want it. The key is we have to be ready, willing and able to take the critiques we will receive. When we show we are ready and willing to react to and incorporate the feedback we are given, then something interesting happens. The stakeholders will trust us with more and more of the "hidden realities". As we show we are able to react to feedback, we will also be able to offer our own feedback and have it be more "personal". Again, it's the hidden world of motives, needs and insecurities that really shapes projects. Being willing to adapt and modify the approach based on what you learn and can incorporate will help tremendously with developing a rapport with "the stage crew".
Let's check back in with Doug and see where we stand with this wild and woolly world of Exploratory Test Automation. As many people already know, there is never a truly repeatable test. That may sound counterintuitive, but it's true. The system state of any two machines will never be 100% the same, even if they theoretically have exactly the same hardware, software, driver configuration and environment variables. There are enough variations in even running two exact replicas of virtual machines that, after just a handful of tests, there is no way to guarantee that the environments will be 100% in sync. This isn't such a big deal in most cases, but yes, even minute voltage differences can make a difference in your results.
This isn't meant to be a downer, it's meant to show all of the aspects that we have to keep in mind when we create what are, on the surface, repeatable tests. To be able to bring a meaningful level of exploration to these tests, there is a need for a model that takes a lot of things into consideration. Our test inputs, precondition data, precondition program state and environmental aspects all need to be considered and, ideally, be part of our testing criteria. If that seems like a lot of stuff to have to maintain, you are right. It's also only half of the story. We also need to incorporate and express decisions on test results, postcondition data, postcondition program state and the state of our environment after our tests. Looking at all of these conditions and considering them is a lot of stuff to keep track of, but it's not impossible. The reason is, we do it all the time. Our "master oracle", i.e. our brain, somewhat does this for us every time we test and look over the system. We look at these things and we work with "test smells" and our "tester's feelings" to determine if we have something to be concerned about.
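To give a flavor of what that can look like in practice, here's a deliberately tiny sketch; the function under test and the oracles are hypothetical stand-ins I made up, not anything from Doug's materials. The idea is to feed randomized inputs to the thing you're testing and run every result past several independent oracles, logging anything that doesn't hold up for a human to look at:

```ruby
# A toy sketch of exploratory test automation with multiple oracles:
# random inputs, several independent checks, and a list of anything
# suspicious to investigate by hand. The function and oracles are hypothetical.
def under_test(a, b)
  a + b  # stand-in for the system being explored
end

ORACLES = {
  'result is numeric'     => ->(a, b, result) { result.is_a?(Numeric) },
  'addition is symmetric' => ->(a, b, result) { result == under_test(b, a) },
  'subtracting b gives a' => ->(a, b, result) { result - b == a }
}

suspicious = []
100.times do
  a = rand(-1_000..1_000)
  b = rand(-1_000..1_000)
  result = under_test(a, b)
  ORACLES.each do |name, oracle|
    suspicious << { inputs: [a, b], result: result, oracle: name } unless oracle.call(a, b, result)
  end
end

puts "#{suspicious.size} suspicious results to look at more closely"
```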
Over in Anne-Marie's talk, Jon Bach is talking about the test manager's relationship with the team, though his role has changed somewhat since he got to eBay. One of the things that Jon recommended when it came to doing one-on-ones with his testers was to make it a two-pronged approach. The first is the casual one-on-one in a conference room, but the second is at their workstation, where he can observe and see what they are doing and how they do it. This is not meant to be an "a-ha" or a "gotcha", but more to see if there are areas where they could use some coaching and, more importantly, to see if they are actually able to be coached. I think this is really important, not just in the sense that we get to see what can be improved, but that we can also learn what they are doing that is exceptionally interesting or adds significant value to the team.
Many testers live in isolation, especially those of us who are lone wolves. Having someone sit with you, see what you really do, and offer feedback based on the way that you actually do things can be exceptionally helpful. Test managers are not overlords, at least the good ones aren't. The good ones are servant leaders, who look to see if they can help their teams and are working to solve authentic problems. Jon focused on helping his team mates develop their reputations as testers. We have a lot of what-ifs that we need to explore, and focusing on the right problems helps, in a manner of speaking, the developers really appreciate the efforts that the testers are putting forth.
Jon used an amusing imitation of Marlon Brando's Vito Corleone saying "I have provided a favor... know that some day, I may ask for a favor in return". It's not literally what he means, but he's right. By building credibility with the development team through our efforts, we develop a reputation that will allow us to call in favors at those times when we really need help from developers. When we work and focus on their most pressing issues and develop a reputation for doing solid work, we will be in a position to get that help when we need it.
Following lunch, I came back in to see what Bob Galen's group was discussing, and had a chance to hear how various teams approach getting over the hurdles of Agile implementation and how to "sell the drama" to others. There has been a spirited discussion about how to deal with dysfunctional organizations. In some ways, we have a limited ability to impact this without using our strongest weapon, which is our feet. If we don't stand up for what we need and believe, then there is no real benefit or reason for a team to change or to adapt. This reminds me of Jerry Weinberg's quote (paraphrased) that "we can change our organization or we can change our organization". That can feel like a rough statement, but in some ways, to quote Nick Lowe, "You've Gotta' Be Cruel to be Kind".
In this case, we may need to be willing to step out of a toxic environment, and encourage others to do the same. If enough people do it, one of two things will happen. The organization will ultimately change to address the criticisms, or adapt to the new reality. Another cool discussion point was that one development team could only work on three things at a time. This was meant to address a situation where the testers weren't feeling they were able to communicate with their development team. When the teams are only "allowed" to work on three items at a time, that drives collaboration and gets the teams to focus on working with each other. I liked this message and appreciated the idea. The implementation, well, that might take a while.
So what's Doug's group up to? I walked in to see a discussion going on about the challenges surrounding oracles and the way that oracles are implemented. With Exploratory Test Automation, you are not just dealing with a single oracle, but perhaps several (or maybe even dozens). There are a lot of costs associated with having multiple oracles. These costs are not just money to pay for something, but also opportunity costs and even exposure to potential litigation. Approaches can range anywhere from "no oracle at all" to dealing with multiple oracles. Remember, an oracle is something that allows a user to say anything from "this is working well" to "now wait, that can't be right!" An oracle is what basically tells you that your program is not doing what it says it is doing. When you run tests without an oracle, how can you tell that the application is failing? Or passing, for that matter? Even simple oracles can be effective. Without them, you may not be able to tell that anything but the most spectacular failures have happened.
I saw Adam Goucher walk in during lunch. I was curious about his recent tweets regarding his "How to be Awesome at Testing" talk and slides, so when I found out he was going to be presenting this talk during Anne-Marie's workshop, I had to see it in person. I'm glad I did. The key takeaway from his talk, if you must have just one, is that "being amazing" is the end goal. As a tester, ignore 90% of the standard criteria. Your customer doesn't care if your product meets standards, they care if it's fantastic! Quality is important, but if you have to take a shortcut on quality to make a customer happy, then do it! It may go against the grain, it may be different than what you envisioned, it may not even meet your standards of what should go out, but at the end of the day, your customer being happy with what you do is the key thing. Neat perspective :).
The challenge of context switching today has been tremendous, and there are many details I am leaving out, partly because I want to preserve some of the content for the attendees (they did pay for it, after all ;) ), but also because I wanted to see these workshops at an experiential level, and that's what each of these workshops offers. They are not just day-long tutorials, they are a chance to get your hands dirty. They allow the participants to actually get into the guts of an idea, which is nearly impossible in an hour-long track session. These are intimate, they are a chance to explore not just your ideas, but other people's ideas. Instead of just the speaker's context, you learn more about the other attendees' contexts, and those help inform your own world view. My thanks to the crew at Software Test Professionals for giving me the opportunity to participate today. I found these sessions to be very valuable and I learned a great deal, even with my bouncing in and out of the sessions. Shortly, I'll be saying farewell to the city of New Orleans, and I'll be gone 2,300 miles and probably seven more hours until my day is done (sorry, I had to do that).
Today I'm reporting live from the Sheraton New Orleans Hotel. While I'm not going to be able to attend the entire conference, I had the opportunity to be here for the "pre-game show", which is where a number of participants put in some solid time (not to mention a lot of effort on the part of presenters) to put on workshops and tutorials for conference attendees.
We hear a lot about the track talks, keynotes and special events. Rarely do we get the opportunity to hear about the tutorials and conferences, and if we do, they are usually commented on by a single attendee several days later. My goal over the next several hours is to get in, examine the tutorials and workshops, and give you a feeling for each of them and the content they are covering.
Having just come off the POST workshop last week, I was curious to see what Lynn McKee and Nancy Kelln would have up their sleeves for today. Lynn and Nancy are, as some may know but many may not, almost always joined at the hip. You will rarely see one without the other, and this dynamic duo that I refer to as the "Calgary Wonder Twins", they are presenting today on the details that do on behind the scenes at any given testing project. The fact is, we often have great intentions, and we believe we can offer a tremendous value. However, behind the scenes of each project are the moving parts that put or testing performance either in a great light or completely hidden.
We struggle with expectations, estimates that may or may not be accurate, we struggle with the value (or lack thereof) of metric, and if they are of value or if they are worthless, and in all cases, unless we are selling testing services, software testing does not make money for the organization. Save it? A great possibility. Safeguard it? Even more likely. Still, the simple fact is, we are an add on to the essential business in many people's eyes. The code is what makes the money, but the quality of the code could have a tremendous impact on the ability of that code to actually make money (or reputation, or add value; remember, not everyone writes code for monetary reasons). Lynn and Nancy will spend the better part of today giving the attendees the chance to consider these areas and work through considerations to get their hands dirty with doing the stage work necessary. We'll check in with our stage crew a little later to see how they are doing :).
A couple of rooms over, Doug Hoffman has a full house covering the topic of Exploratory Test Automation. For many, automation has some great benefits, but it also has a number of disadvantages. Automation is great at static tasks. It's also great for doing repetitive set up and take down changes. IT can confirm, it can state if something exists or if it doesn't, it can give a log about what appears on the screen. Automation can see, but it can't feel anything. Automation doesn't have emotions, it doesn't have gut feelings, it can't detect a "code smell". That's up to humans that use a very specific oracle... the human brain. We suffer, however from the fact that most automation doesn't give us the opportunity to inspect and consider a broad range of adaptable opportunities.
To do that, we need exploration skills, and the ability to ask probing questions based on what we learn at any given moment. Humans are remarkably unreliable, slow, and inconsistent, but we have the ability to make broad mental leaps to consider issue and challenges. Automation is very fast, very reliable, can be very consistent, but it's incredibly stupid. I mean stupid compared to the ability of the human brain to make deductions and consider new avenues. Exploratory Test Automation seeks to bridge that gap. There are limits to how effective this is, but there are processes that allow testers to do this in a limited capacity. We can create multiple paths to examine. We can set up in advance multiple oracles to utilize and examine lots of criteria. We can use a lot of random, pre selected data in varying scenarios, and we can tie these pieces together to get a more complete picture than w can with a static and single focusing automated test. How do we effectively do this? Tune in a little later, and I will hopefully have some answers :).
I took a step back to see what Bob Galen was saying about Agile Testing and, more important, how to effectively lead Agile teams. Teams are the ones that design and determine whether or not a project can be effectively organized and run. Agile teams have the ability to shape projects rapidly, to create products and services that can be modified rapidly and pivot to focus on changing requirements. In some environments, especially when hardware development or embedded systems have to be created, it's a stretch to apply some of these techniques, but they have a value and can be applied. Bob opened the session by having everyone post questions and opportunities for the discussion on a white board. the questions from the team are driving the initial discussion, so that they can focus on understanding the challenges for the group in attendance.
Some of the questions are unique to specific contexts, but many of the questions asked can span multiple disciplines, businesses and even development models. Whenever we get a group of testers together, we get a lot of "variations on a theme". We may think we have unique requirements, but in the big picture view, our needs are not so different from one another after all. What often changes are the specific details for the personality of the given team or product.
Anne-Marie Charrett was one of the first people that I have the opportunity to interact with during my very first AST BBST Foundations course a couple of years ago. Much of my initial feedback and extended commentary on my strengths and areas I could potentially modify and improve came from her. I consider her a great representative of the community, and was excited when I saw that her career development workshop was being offered at STPcon.
For those who are not familiar with Anne-Marie, she has an expertise in coaching testers. When we hear about coaching testers, we often consider that to be James Bach's or Michael Bolton's domain, but Anne-Marie is also a solid contributor to that area (she does private sessions with testers willing to work through challenges and get direct feedback and she's super at this). For this session, Anne-Marie and Fiona have combined forces to give testers an opportunity to consider their career opportunities and where they might want to go with their carer. Sometimes we get bogged down into the details of our every day lives, and over time, we get lost in the weeds. There's an old yarn that goes "meet the new boss... same as the old boss". Well, the same can be said for testers when they look for career changes or step into different companies. "It's totally different... just like the last testing gig I had".
The fact is, in today's work world, we have more opportunities to take our career into our own hands. For millennia, there were often barriers to being mobile and dynamic in career choices. If you wanted to farm, you had to have land. to be an artisan, you needed to have tools. To be a blacksmith, you have to have a forge. To create a product for many people, you needed a factory. The barriers and challenges of changing were that the means of production were expensive and frequently unportable. Today, with the confluence of the web and computing power and devices, anyone with a laptop and a smart phone has the ability to carry their "factory" anywhere. In this brave new and weird world, the ability to produce and create your product has never been closer. The up side is that we can be as dynamic and as portable as we would like to be. The downside is that the ability to manage those opportunities.
Anne-Marie and Fiona focused on different career areas to consider. testers have the ability to be individual contributors (software testers), leading teams (test managers), taking their options on the road (test consultants), or helping others develop their skills (coach and trainer). While there are many opportunities in just those areas, there are also many other avenues to consider and examine (writing is a cool opportunity, if I do say so myself). Matt Heusser just finished talking about making the move from individual contributor to roving test consultant, and the rugged realities one faces when making those changes. We've talked about this many times, but it's always interesting to see the perspectives that other people's questions help bring to that discussion. Don't you wish you were here to hear that ;)?
I decoded to jump back to Lynn and Nancy to pick up and see where they were at with regards to the behind the scenes details of testing projects. I came in on a exercise related to communication and discovering the explicit and hidden motives of teams. If I don't consider the fact that there are hidden motives, not only am I being somewhat naive, but I'm also missing a huge component of any project. When we act as advocates for testing teams and processes, we are much more likely to be able to communicate the benefits of our approaches and our methods if instead of hoping we are addressing our up front expectations, instead look to the hidden agendas each group has. the core to this process is communication and getting feedback.
Note, we can't just say we want feedback and blissfully stand around thinking it isn't going to come. Oh believe me, we'll get it if we really want it. the key is we have to be ready, willing and able to take the critiques we will receive. When we show we are ready and willing to react to and incorporate the feedback we are given, then something interesting happens. the stakeholders will trust us with more and more of the "hidden realities". Are we show we are able to react to feedback, we will also be able to offer our own feedback and have it be more "personal". Again, it's the hidden world of motives, needs and insecurities that really shape projects. Being willing to adapt and modify the approach based on what you learn and can incorporate will help tremendously with developing a rapport with "the stage crew".
Let's check back in with Doug and see where we stand with this wild and woolly world of Exploratory Test Automation. As many people already know, there is never a truly repeatable test. that may sound counter intuitive, but it's true. The system state of any two machines will never be 100% the same, even if they theoretically have exactly the same hardware, software, driver configuration and environment variables. There are enough variations in even running two exact replicas of virtual machines that, after just a handful of tests, there is no way to guarantee that the environments will be 100% in sync. This isn't such a big deal in most cases, but yes, even minute voltage differences can make a difference in your results.
This isn't meant to be a downer, it's meant to show all of the aspects that we have to keep in mind when we create what is, on the surface, repeatable tests. To be able to bring an exploration level to these tests that is meaningful, there is a need for a model that takes a lot of things into consideration. Our test inputs, precondition data, precondition program state and environmental aspects all need to be considered and, ideally, part of our testing criteria. If that seems like a lot of stuff to have to maintain, you are right. It's also only half of the story. We also need to incorporate and express decisions on test results, post condition data, post condition program state and the results of our environment after our tests. Looking at al of these conditions and considering them is a lot of stuff to keep track of, but it's not impossible. The reason is, we do it all the time. Our "master oracle", i.e. our brain, somewhat does this for us every time we test and look oer the system. we look at these things and we work with "test smell" and our testers feeling" to determine if we have something to be concerned about.
Over in Anne Marie's talk, Jon Bach is talking about the test manager relationship with the team, though his role has changed somewhat since he got to eBay. One of the things that Jon recommended when it came to do a one on one with his testers was to have it be in a two-pronged method. The first is the casual one on one in a conference room, but the second is at their workstation, where he can observe and see what they are doing and how they do it. this is not meant to be an "a-ha" or "gotcha",but more to see if there are areas that they could use some coaching and more important, to see if they were actually able to be coached. I think this really important, not just in the sense that we get to see what can be improved, but that we can also learn hat they are doing that is exceptionally interesting or adds significant value to the team.
Many testers live in isolation, especially those of us who are Lone Wolf's. Having someone sit with you, see what you really do, and offer feedback based on the way that you actually do things can be exceptionally helpful. Test managers are not overlords, at least the good one's aren't. The good ones are servant leaders, who look to see if they can help their teams and are working to solve authentic problems. Jon focused on the ability to help his team mates develop their reputations as testers. We have a lot of what-ifs that we need to explore, and the ability to help focus on problems that, in a manner of speaking, help the developers really appreciate the efforts that the testers are putting forth.
Jon used an amusing mimic of Marlon Brando's Vito Corleone saying "I have provided a favor... know that some day, I may ask for a favor in return". It's not literally what he means, but he's right. By creating credibility with the development team based on our efforts, we are able to develop a reputation that will allow us to call in on those times when we really need help from developers. When we work and focus on their most pressing issues and develop a reputation for doing solid work, we will be able to be in a position to get that help when we need it.
Following lunch, I came back in to see what Bob Galen's group were discussing, and had a chance to discuss how various teams approach getting over the hurdles of Agile implementation and how to "sell the drama" to others. there has been a spirited discussion about how to deal with dysfunctional organizations. In some ways, we have a limited ability to impact this without using our strongest weapon, which is our feet. If we don't stand up for what we need to and believe, then there is no real benefit or reason for a team to change or to adapt. This reminds me of Jerry Weinberg's quote (paraphrased) where "we can change our organization or we can change our organization".that van feel like a rough statement, but in some ways, to quote Nick Lowe, "You've Gotta' Be Cruel to be Kind".
In this case, we may need to be willing to step out of a toxic environment, and encourage others to do the same. If enough people do it, one of two things will happen. The organization will ultimately change to address the criticisms, or adapt to the new reality. Another col discussion point was that the development team could only work on three things at a time. This was meant to address the situation where the testers weren't feeling they were able to communicate with their development team. When the teams are only "allowed" to work on three items at a time, then that dries collaboration and getting the exams to focus on working with each other. I liked this message and appreciated the idea. The implementation, well, that might take awhile.
So what's Doug's group up to? I walked in to see a discussion going on about the challenges surrounding oracles and the way that oracles are implemented. With Exploratory test automation, you are not just dealing with a single oracle, but perhaps several (or maybe even dozens). There's a lot of costs associated with having multiple oracles. These costs are not just money to pay for something, but also opportunity costs and opening to the potential for litigation. Oracles can range anywhere from "No oracle at all" to dealing with multiple oracles. Remember, an oracle is something that allows a user to say "this is working well" to "now wait, that can't be right!" An oracle is what basically tells you that your program is not doing what it says it is doing. When you run tests without an oracle, how can you tell that the application is failing? Or passing, for that matter? Even simple oracles can be effective. Without them, you may not be able to tell anything but the most spectacular potential failures has happened.
I saw Adam Goucher walk in during lunch. I'd been curious about his recent tweets regarding his "How to be Awesome at Testing" talk and slides, so when I found out he was going to be presenting this talk during Anne Marie's workshop, I had to see it in person. I'm glad I did. The key takeaway from his talk, if you must have just one takeaway, is that "being amazing" is the end goal. As a tester, ignore 90% of the standard criteria. Your customer doesn't care if your product meets standards, they care if it's fantastic! Quality is important, but if you have to take a quality shortcut to make a customer happy, then do it! It may go against the grain, it may be different than what you envisioned, it may not even meet your standards of what should go out, but at the end of the day, your customer being happy with what you do is the key thing. Neat perspective :).
The challenge of context switching today has been tremendous. There are so many details I am leaving out, partly because I want to preserve some of the content for the attendees (they did pay for it, after all ;) ), but also because I wanted to see these workshops from an experiential level, and that's what each of these workshops provides. They are not just a day long tutorial, they are a chance to get your hands dirty. They allow the participants to actually get into the guts of an idea, which is nearly impossible in an hour long track session. These are intimate, they are a chance to explore not just your ideas, but other people's ideas. Instead of the speaker's context, you learn more about the other attendees' contexts, and those help inform your own world view. My thanks to the crew at Software Test Professionals for giving me the opportunity to participate today. I found these sessions to be very valuable and I learned a great deal, even with my bouncing in and out of the sessions. Shortly, I'll be saying farewell to the city of New Orleans, and I'll be gone 2300 miles and probably 7 more hours until my day is done (sorry, I had to do that).
Saturday, March 24, 2012
The Ripple Effect
As I sat at the airport in San Francisco yesterday, waiting for a flight to take me to New Orleans by way of Houston, I noticed that the plane that we were supposed to be taking was just letting people off at the time that we were supposed to start boarding. I can live with an hour delay when I have a straight flight, but it always makes me nervous whenever I have another flight to catch. Sure enough, that delay stretched into an hour delay for our plane's departure, and that hour delay made it impossible to catch my connecting flight.
What follows is the wonderful comedy of errors that tries to get us onto other flights, the marvelous and completely aggravating "standby" system, especially when the plane that's supposed to be used for the flight has to be changed and downgraded to a smaller plane, making it nearly impossible for anyone to fly standby. The net result was that the only guaranteed flight I could book was the next morning. I'm now in the boarding area waiting for that flight to board, after having spent the night lying on the floor at the Houston Intercontinental Airport. By all accounts, it looks to be on time.
You might stop to think that I'm complaining about all of this. In a way, sure, I'm bummed that an issue with a control system put a plane in a holding pattern that triggered a chain reaction of events that has now caused me to be almost 12 hours delayed from arriving where I needed to be when I needed to be there. At the same time, I step back and think how amazing it is that these systems we take for granted every day are really quite fragile, and that they are designed with almost no slack in them. When things go right, it's smooth and painless, relatively speaking, and we think little about all that goes on. It only takes a slight hiccup, though, to throw our plans and our approaches completely out of whack.
The most frustrating aspect of these delays is that, so often, we get no information about them, and we only find out our situation when it's too late to do anything about it. What could I have done? Once I was on the tarmac, I couldn't have gotten off the plane, but I could have at least had a shot at trying to book a later flight. That becomes much less likely when you are running across the terminal, desperately trying to catch a flight you just missed by 8 minutes, and then trying to get onto another flight, where any slack in the system has been totally used up or, worse, eaten by the ripple effect of another delay. That's the real lesson in a lot of this. I can deal with having to sleep in an airport terminal, that doesn't bother me. I can deal with having to take a later flight. It's a bummer that the business I need to conduct will be delayed by a few hours, but overall, I can communicate that. It's when those impeding our progress don't talk to us or share what's going on, taking away our ability to make a decision, that it gets frustrating.
Bad news never gets better as time progresses. The blow won't be softened with time. If a delay is imminent, please let us know as soon as you know. If it's bad news, we can take it. If it's a delay, we can adjust, but we can't do much if we don't know what's going on.
Friday, March 23, 2012
Getting It Done, By Any Means Necessary
Aaron Scott's latest Two Leaf Clover strip just happened to hit right at a time when I was dealing with a big blow-up, so I found this especially amusing today.
Often, we have the time to do things the right way, the clean way, the methodical way. I appreciate when I get those times and I try my best to leverage them appropriately. However, there are just some times where "ugly" has to do. I don't mean "ugly" in the physical sense, I mean that there are just times where you don't get the opportunity to put something together that would be elegant. Instead, you need to get results, you need them very fast, and you deal with the fact that the inelegant solution will get you over the finish line in just enough time with just what you need. Later on, you can refactor and make it pretty, but for right now, you just want something that works, makes the case, and gets you where you really need to be in a short amount of time.
Often, I find that I have a quick need to check and parse whether pages are displaying something. I could eyeball the pages myself, or I could use cURL to download and examine the pages automatically. There are a lot of potential links, and while in a perfect world I'd be able to import variables, parse for values, split them out and then use them as looping criteria, sometimes you just say "look ten pages deep, go through 100 sources, and check 2000 links, and do it before lunch". In those cases, the quick and dirty beauty of the command line shines through. It's times like this where I look at UNIX/Linux, smile and say "my stars, how I've missed you!" It's often not elegant, glamorous or even pretty, but when it gets the job done, I will not complain :).
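For anyone curious what that quick and dirty check looks like, here is a minimal sketch of the same idea, written in Python rather than the actual cURL one-liners I leaned on; the start URL, the link limit and the href regex are placeholder assumptions, not my real commands.

```python
# Quick and dirty link checker: fetch a page, pull out its absolute links,
# and report the HTTP status of each one. Ugly, but it gets an answer fast.

import re
import sys
import urllib.request
import urllib.error

HREF_RE = re.compile(r'href="(https?://[^"]+)"', re.IGNORECASE)

def fetch(url):
    # Grab a page the way curl would; return its body as text (or None on error).
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read().decode("utf-8", errors="replace")
    except (urllib.error.URLError, ValueError) as err:
        print(f"FAIL  {url}  ({err})")
        return None

def check_links(start_url, limit=100):
    body = fetch(start_url)
    if body is None:
        return
    for link in HREF_RE.findall(body)[:limit]:
        try:
            with urllib.request.urlopen(link, timeout=10) as resp:
                print(f"{resp.status}  {link}")
        except urllib.error.HTTPError as err:
            print(f"{err.code}  {link}")
        except urllib.error.URLError as err:
            print(f"FAIL  {link}  ({err.reason})")

if __name__ == "__main__":
    check_links(sys.argv[1] if len(sys.argv) > 1 else "http://example.com/")
```

Ten minutes of something like this beats an afternoon of eyeballing pages, which is exactly the point.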
Testing as a Service? A Post-POST Post
I had a great opportunity to participate in the Calgary Perspectives on Software Testing (POST) workshop last weekend. Since it's taken me a few days to get caught up with reality upon my return home, it's also given me some time to reflect on the discussions and ideas that we all shared while we were there.
This two day workshop was run in what is commonly referred to as the "LAWST" style of discussion and debate. For those not familiar with LAWST, it stands for the "Los Altos Workshop on Software Testing", and its facilitation method has become a practice used by a number of organizations to help foster debate and move the discussion forward. It was as a facilitator that I came to POST, and so it is as a facilitator that much of the information, discussions and takeaways are going to be filtered. I suspect that others who attended will fill in the blanks over the coming days and share their perspectives as well.
What also made this workshop interesting was that it was attended by several people that I recognize as some of "the names" in the testing community. The workshop was organized by Lynn McKee and Nancy Kelln, who I affectionately refer to as the "Wonder Twins" of software testing (meaning it's rare to see one without the other, anywhere). Fiona Charles and Janet Gregory were also in attendance. In addition, a variety of software testers from various industries within and around Calgary were present. I was the "Lone Yankee" in the group, so this gave me not just a glimpse into the topic we were discussing, but also the state of software testing in a different country and context than I was used to.
The topic of "Testing as a Service" is an interesting one, and it's one that I've both used, refined and struggled with over the years. There was spirited debate around this topic and the various angles that the speakers took on this. The conversation ranged from defining exactly what was meant by service, whether or not testing be regarded as a service meant that we would always be seen and treated as an "other" entity, and an especially lively debate regarding introverts and extroverts and their roles in the testing process (that one deserves its own blog post, and rest assured, it will be getting one :) ).
My initial take on this is to focus on what we mean by "service". In one way, it's very noble. We are indeed providing a resource to a development team and a customer base. The level of commitment and involvement we put into it is a big determinant in what and how we perform, and how well our projects perform in the field. There's also a hidden aspect of the word service that we discussed at great length: the fact that, when we describe what we do as a service, we are, effectively, making our involvement "other" to the development team. We frequently questioned whether or not "software testing as a service" should stand alone from "software development as a service" or "system operations as a service". The point was made a few times that in some industries in Calgary, the service offerings were very literal, to the point where companies could decline to even engage software testing as part of their service.
It has been my own experience that this is often seen as a semantic distinction, but it's a real one. Some might put it down to a choice of words, but it really does run deeper than that. For most software testers, unless you come from a strong programming background, the goal of being 100% integrated with a development team is probably not going to be realized. Even in the most Agile of workplaces, testing is a cost of doing business, and it's an important one. While we are often integrated into development teams, that integration is rarely seamless (from my observation, anyway). In my own work environment, though I have the ability to talk with the programmers and discuss issues or seek out new areas to test, ultimately, I am a somewhat external service, and any time I'm not actively testing is a cost to the team. Many practices factor into the ways that we test and collaborate, and we often debated the usefulness of saying that we were there to "serve in collaboration with the programmers".
Another point that was often brought up and clarified was the fact that, in many Agile teams, the term software developer goes well beyond those people who program. Programmers and testers are both software developers in that we are all responsible for making sure that the software grows, is tested and is of as high a quality as possible when we release it to the public. I personally find I can relate to that idea, and in some ways, it helps move the conversation along and gives us all opportunities to participate more directly with the programming team. In my case, though, I am ultimately seen as the last defense between a bug being spotted internally versus out in production. That doesn't change, and ultimately, that's the role I play on the teams I participate with. My service is that of making sure that I apply my wiles and my crafty nature to question software and see how well it stands up to scrutiny. Ultimately, if I don't do that well, then my service is of lesser value. Simple as that.
Expect to see lots more recollections and ideas from POST coming in the next few days.
Tuesday, March 20, 2012
A Tester's "Personal Triumph"
This shot is of me at the Canadian "Continental Divide". The tip of my board is in British Columbia. The tail is in Alberta.
Quick and dirty recap for new people, on August 29th, 2011, I broke my leg in two places. The tibia was broken clean through two inches above the ankle joint. The fibula was broken clean through two inches below the knee joint. The tibia was reassembled and reinforced with a stainless steel plate. The fibula is basically fending for itself (and by all accounts doing just fine).
There was a genuine concern that had been haunting my dreams for the past six months... will I ever be able to ride a snowboard again? The orthopedic surgeon seemed to think that I could ride again, but that I might lose some performance due to diminished movement in my leg (that's a fact, I have about a 20% range of motion reduction), and there was the concern that I would feel an inordinate amount of discomfort if I cranked out turns or if I landed jumps.
This past weekend I was up in Calgary, Alberta for the POST testing workshop (and you will be hearing a lot more about that in the coming days, I promise :) ). One of the things that sweetened the deal about going up there was that one of the conference organizers offered to take me up snowboarding at Banff (Sunshine Village specifically, but located in Banff). There was no way I was going to pass up the opportunity to ride in Banff, so I brought all of my gear, minus my board and bindings. I'd rent a board up there so I wouldn't have to put people out by lugging my board around.
On Monday, we made the drive from the Calgary area up through Banff to Sunshine Village. We geared up and we got up to the top of the mountain, grabbed a chair that accessed both green and blue runs, and I faced my moment of truth.
A note about testing on real people in real life; with software, I'm perfectly happy to do truly malevolent things to see if I can bring a system to its knees. That was not my goal today. Instead, I was aiming to see how much I could get away with without causing technical or physical failure. The plate is holding the bone together. It’s also much less flexible than bone would be on its own. It’s also somewhat compromised by the number of screws and pins that are holding it all together. One bad tweak could cause the plate to shear and the pins to pull. Not my idea of a good time. With that, I made sure to spend a little time to stretch, to be as limber as possible, and to ride gingerly at first and build up speed and intensity.
It wasn't pretty or stylish, but the results were enough to make me smile from ear to ear. I was riding, on a snowboard, under my own power, down the runs that I would most likely ride down, and I was, mostly, doing what I'd always done. There were some noticeable differences. The first is that my right leg fatigues more quickly than my left one does. That doesn't really surprise me, as it's still not back to the muscular size it was before the accident. Landing jumps (even little bumps that give you a touch of air and then have you landing) felt a bit jarring and sharp, so for the time being, I know that big air is certainly not in the near term picture. Unstrapping my rear foot and pushing was a little more bothersome, as I didn't have the endurance to go as far as I used to. Barring those minor issues, though, I felt about 90% of my former self. All things considered, that's not too shabby :).
My thanks to my newfound friends in Calgary, Alberta for a great and informative weekend, and my thanks to my friend Lynn for taking me up to the snow to let me cross off a bucket list dream (never ridden in Canada, so glad to now say that Banff has been tracked by yours truly). You’ll be seeing much more about the POST conference in my future posts, so stay tuned!
Friday, March 16, 2012
Being a Fly on the Wall, Or a Plane
Today was a chance to take a break, get some travel out of the way and get myself from San Francisco up to Calgary for the Calgary Perspectives on Software Testing Workshop (POST).
The fun started when I heard news that United and Continental were merging their computer systems this week. I figured "this should be interesting". It definitely became even more interesting when I discovered, after standing in Air Canada's line for a half hour early this morning, that the flight was being serviced by United Express. Meaning I was in the wrong terminal, and had better get a move on over to the Domestic terminal if I wanted to catch my flight.
I hustled on down, and tried to check in at the kiosk, only for the kiosk to tell me my ticket was out of order and that an agent would have to help me. Turns out my order was placed for a paper ticket, even though I processed the whole thing online and had the receipt to prove I was supposed to be getting an e-ticket.
This comedy of errors was resolved and I made my way through security (actually much faster now that I know that stainless steel plates don't set off metal detectors). After making my way to the counter, showing my passport, waiting just a few minutes and then getting on the plane, I figured this was all over... well, not quite.
It seems setting up and merging the two companies (Continental and United) resulted in a rather big amount of software conversion... conversion with limitations. One of the casualties of this conversion was the ability to electronically match the flight manifest with what was stored in the computer. The flight crew had to do a manual tally of all passengers, and write down and submit this manifest. Not such a big deal on the surface, except that it delayed the flight by about 30 minutes (not a problem for me personally, but I'm sure those making connections in Calgary were none too thrilled).
I'm sure that we'll see something written up about this in the papers at some point, and it will be labeled as a "glitch". Well, no, it's a little more than that; it's a failure of the systems to integrate, and there's little in the way of information being shared with the passengers to describe what is happening. I don't mind the errors. Being silent about it is what irritates me.
Still, all's well that ends well. I made it to YYC, a cab took me to my hotel, and I'm just geeking out and focusing on some work until friends come and pick me up for dinner. Looking forward to my time here in Alberta, thanks go out to those who invited me to be here :).
Wednesday, March 14, 2012
WebDriver and Jenkins and Robots, Oh My!
Another Wednesday, another San Francisco Selenium Meetup. It's been ages since I've hit two Selenium meetups in a row, so my thanks for holding them on Wednesday nights lately.
Our intrepid flock of fellow nerds has gathered at Eventbrite, which is just a couple of blocks from Caltrain (yay!), and they've brought in banh mi sandwiches of a dizzying variety, beer, soda and water aplenty, and there's a standing room only crowd.
Tonight, we are learning about how to use Selenium (WebDriver) with Jenkins, a Continuous Integration server I've wanted to play with for some time. Jenkins creator Kohsuke Kawaguchi gave a fast paced presentation and demo of Jenkins and how it can interact with the Selenium Grid. He also shared some frustrations and some requests for how Selenium could be extended, such as not having to programmatically create the URL, allowing for easier proxy settings, and a native report generator (output can be captured with code hacks already, but having a native report formatter would be a sweet addition).
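For readers who haven't wired these two together, here is a rough sketch of what a Jenkins build step typically kicks off when it talks to a Selenium Grid. The hub address and the page being checked are my own placeholder assumptions, not anything from Kohsuke's demo, and the Python bindings are used the way they looked around this time.

```python
# Minimal sketch of a WebDriver check run against a Selenium Grid hub,
# the kind of script a Jenkins job might execute after a build.

from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

def run_smoke_check(hub_url="http://localhost:4444/wd/hub"):
    # Ask the Grid hub for a Firefox session; the hub routes the request
    # to whichever registered node can satisfy the capabilities.
    driver = webdriver.Remote(
        command_executor=hub_url,
        desired_capabilities=DesiredCapabilities.FIREFOX,
    )
    try:
        driver.get("http://example.com/")
        assert "Example" in driver.title, "unexpected page title"
        print("PASS:", driver.title)
    finally:
        driver.quit()

if __name__ == "__main__":
    run_smoke_check()
```

Jenkins's part is mostly to schedule a script like this (or its Java/JUnit equivalent), point it at the right hub, and archive whatever output comes back, which is where Kohsuke's wish for a native report formatter comes in.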
Selenium's creator, Jason Huggins, sporting a very hipster beard these days, showed off his robot called Bitbeam. Now maybe the idea of a robot doesn't seem all that interesting, but how about a robot that can play Angry Birds? Well, that's what Bitbeam does. It uses an actuator to act as a finger, and that finger gets lowered onto a table, and that actuator actually fires Angry Birds at targets (quite well, I might add :) ). You can check out the back story of the robot at bitbeam.org. The subtitle of the talk was "How Robots are the Future of Testing". The idea is that the robot could potentially be used to automate testing on smart phones or tablets. It's an intriguing idea for doing location-type tests. It's automated "manual" testing taken to a whole new level.
Theo Cincott and John Shuping, both engineers with Eventbrite, took the last spot to show off specifically how they are using Jenkins and Selenium in their production environment. Eventbrite deploys each week, which makes sense considering their market (they sell tickets to events, so they need to update regularly). They also do daily bug fix releases. In Eventbrite's environment, Jenkins is the central hub of their builds, the firing off of their tests (unit and Selenium), and then the push to their production systems and the rest of the world. So often we hear people talk about how they do this stuff with little in the way of actual details. This talk shared the flow and the challenges they face when they are doing their deployments. Too much to type here, but I'm sure the slides will be posted soon :).
Again, my thanks to Ashley Wilson and the rest of the team at SauceLabs that puts on these meetups month after month for us nerds to see how our fellow nerds are using this stuff and making it work in their environments. It helps make those of us still learning about all this get an opportunity to get just that much better.
Happy Belated 2nd Birthday TESTHEAD
With so many things going on this past weekend, such as closing up Foundations, getting ready to start Test Design, finalizing things for my trip to Calgary to participate in the POST workshop and follow on the next weekend to head down to New Orleans for the AST Board meeting and the first day of STPCon... I realized that I forgot to mark a milestone. On Saturday, March 10th, TESTHEAD turned two years old!
The fact is, none of the things I am doing this week would likely be happening had I not taken that step. Creating TESTHEAD was my gateway into the wild and woolly world of "sapient software testing" and the community of madcap crazies that is associated with it. From very humble beginnings and months of articles that no one read, it has slowly grown to a point where I receive on average 300 page views each day. TESTHEAD led to my receiving comments and starting conversations with various luminaries in my field. It led to a joint venture with Matt Heusser in developing and producing a podcast that we now all know as TWiST. It was the reason I put myself on Twitter in the first place, and where the idea of being "the TESTHEAD" and associating myself with a crash test dummy as a humorous aside somehow resonated with a great group of people.
I wrote one of my most popular posts, titled "Well, How Did I Get Here", to describe what happened in the nine months that first followed creating TESTHEAD. At the one year mark, I was working for a different company and taking my Lone Tester view into an Agile team (and sharing the fact that it was not always an easy fit, or wondering what I was doing here). That journey has gotten a little easier in the sense that I've embraced learning more about coding, taking on automation projects, and striving to find ways I could provide my testing ideas in new and somewhat different ways for a company that had never had a tester before. The great thing is I've had so many join along with me in the adventure.
During TESTHEAD's toddler year I was elected to the Board of Directors for the Association for Software Testing, and with that I was given the opportunity to step in and become the Chair of the Education Special Interest Group and now, the whole delivery of BBST for AST members is my responsibility!
Weekend Testing Americas has taken on a life and a mind of its own, it seems, which is great because it's the opportunities that the participants bring to it that makes it a living and lively entity. WTA has also helped me in ways to become a much better Test Manager and testing mentor. Furthermore, it has provided me with plenty of opportunities to talk about and share the model with conference audiences and small group talks over this past year.
A cool thing that also happened this year was the collaboration with Aaron Scott and his Two Leaf Clover strip, and being able to share, and laugh, and comment on some of the strips that have hit me in a funny or entertaining way. I also wonder if some of his audience has stuck around to read more of the TESTHEAD archive as well. The topper of all this is that he dedicated his strip "One for the Road" to me. It's the one in the upper right hand corner of my site :).
To put it simply, I don't think any of this would have happened had I not decided to put my brain on a chopping block and open myself up to potential ridicule. I think the trade has thus far been worth it. For those who have been part of this journey with me from the beginning, thanks for letting me reach this stage, and here's hoping I don't suffer from "the terrible twos" :). For those who are more recent discoverers of TESTHEAD, I thank you for making this site a regular destination, and I hope that cool and interesting things will be talked about here and that you will think of them as such and tell your friends about it. It's been a really fun ride so far.
Tuesday, March 13, 2012
From Master To Student Again
AST's BBST Test Design class is now underway, with a slightly unexpected modification. When we ran the pilot course, I assisted Cem in teaching it, which meant I had about one week's more involvement with the material than the participants did. The old maxim "you can teach someone anything if you are two chapters ahead" certainly held, but it was also challenging to do it in that manner.
As we set up for the first run of the official class, and we accepted the participants, Cem made a suggestion to me. He said "Michael, how would you like to be a student this time around? You didn't get that opportunity for the Pilot, and if you are going to be leading these classes, it's important that you also see the course from the participants' perspective". How could I argue with that :)?
So as of Sunday, I am now a participant in Test Design, and I get to be a student again. This is meshing pretty well overall. I did my Foundations class in early 2010, my Bug Advocacy class in early 2011, and now Test Design in early 2012. This will also give me a chance to see how well I understand the material I helped teach the first time around, and if this experience and a different perspective will help open up answers that I wasn't aware of before. In any event, this will be a cool experience, and I expect to learn a great deal from doing it.
It does, however, mean I will have to ask some indulgence from my Foundations class participants to give me a little bit more time to give them additional feedback. Much as I want to, I cannot stuff more than 24 hours into a day, and contrary to popular belief I really do need to sleep sometimes :).
Monday, March 12, 2012
My Kingdom for an Even Keel?
One of the interesting things about a relationship, especially one that you have had for over 20 years, as in the case of my wife Christina and me, is that you often overlook the things that the other person does. We had a discussion about this tonight during dinner. I'm not sure what precipitated it, but I was talking about how I had to get some stuff done, and that it would be a good time to "knock out some Pomodoros". I said this mostly to myself, but she heard it and asked me "what are Pomodoros?" I then proceeded to explain to her the whole concept behind using a timer to designate breaks and focused work. She nodded, said OK, and then we changed the subject, but I noticed she was a little on edge.
Later, as I was cleaning up the dishes, I asked her if everything was OK. She looked at me and said "You know, something between complete silence and information overload would be really nice sometimes." That's a paraphrase, and I don't mean to make it sound negative, but she was making a point. This has been part of who I am most of my life. Either I am absorbed in quiet thought and non-communicative, or I am spilling out words at 100 miles an hour. Either I am running on all cylinders and blazing down the track with frightening energy and speed, or I am crashed out asleep. Well, not quite crashed out asleep, but there is a definite unevenness to the way I do a lot of things, and I have to be aware and careful of what I pick up and what I commit to doing.
I compare this to Christina, who works in a very different manner than I do. She is very middle of the road in the things that she does, and she has the presence of mind to know she likes it that way and works to make sure she can keep that "even keel". Christina is the type of person who will learn she is teaching a class at church in a month, and then spend a little bit of time each and every day working on her lesson plan for that one hour presentation. She doesn't do a lot at a time, but she does a few minutes here, a few minutes there, and spreads it out among the other things she does. As a result, she tends to navigate her life in a fairly straight line.
On the other hand, I have been a creature of riding the waves. Oftentimes I get into situations that will require a Herculean amount of effort, or that will have me engaged in several things simultaneously. Often I do well in this regard, and other times I crash into walls. The net result is that I am very much one who does a lot of high intensity, endorphin releasing activity, but it's also followed by fallow periods where I then have to build back into the frenzy. I realized a long time ago that this is a manifestation of ADD, and I'm OK with that. When I was younger, I used medication to try to control these wild swings and give me a more even keel, but ultimately, I decided that I liked the wild swings. With the even keel, I could be consistent, but I also felt a lot less creative, and I ultimately felt like my output was mediocre. It was certainly stable, and I could pass tests and get good grades, but I don't think I would have reached some important things in my life if I didn't have the waves to work with. I don't think I would have ever written the songs I did, taken the chances to be a musician, or gone in a number of other quixotic directions that ultimately informed and helped shape my life. It was a safe route, and I decided that I didn't want to play it safe. It's for these reasons I decided not to go the medication route, and just try my best to let people know that I rode waves and was very much aware of it.
It's been over 20 years since I last medicated for this, and while I will admit it's still sometimes an all or nothing game with me, and I have to fight between being all in or needing to shut down for a day or two, I've come to accept that and take the good that comes with those challenges. Ultimately, I think I made a good trade. To those who have to deal with me on a daily basis, I guess your mileage may vary. One thing's for certain, I'm happy to have a wife who is able to drive a boat in a straight line... it makes it possible for me to surf the waves a lot easier, and I'm grateful for the fact that, most of the time, she doesn't really mind :).
Friday, March 9, 2012
Follow the Crumbs
Just a quick but link heavy note today.
Albert Gareev and I have been collaborating on some articles related to test automation. Albert is really good at working with and designing frameworks. I'm really good at asking lots of possibly stupid questions. Put us together, and we've developed a cool way to determine if your test automation efforts are on track.
This is all an elaborate ruse to get you to go to Software Test Professionals' site and read our latest article "Follow the Crumbs to Evaluate Automation". If that isn't enough for you, then download the entire March 2012 issue of ST&QA Magazine.
If that isn't enough for you, and if the CRUMBS story interests you, you might find the story that we did in the January 2012 issue of ST&QA interesting. Look for the article "Coming to TERMS With Test Automation". It's the prequel to our current article.
Finally, if you want to hear about some of my own challenges and foibles dealing with test automation, I got to spend a good chunk of TWiST #86 talking about it.
As always, you do need to register with the site to get to these links, but registration costs you nothing, and we certainly hope the content will make it worth your while to do so. As always, if you have any questions or comments, good or bad, I'd love to hear them :).
Thursday, March 8, 2012
A Dude's Thoughts for Testing Education
Yesterday I posted an entry for the Association for Software Testing blog. It was also forwarded to the rest of the membership via their newsletter. In it, I made clear the fact that I will be taking over as the chair of the Education Special Interest Group (EdSIG) on March 31, 2012. But that wasn't all. I also stated that I had another "bold boast" up my sleeve, potentially my boldest boast yet.
I stated that, while I both admired the work and value of the content that is available through the Black Box Software Testing (BBST) courses, and I personally found them to be very valuable, there are many members of AST who will never take the classes. The reasons are varied, but they often come down to one thing. BBST's three classes, as designed, are the equivalent of a university semester course on software testing. You get a lot of one-on-one time with instructors and assistants, we provide feedback and direct grading, we coach people directly. For those willing to commit to it, it is immensely beneficial, but there's no question about it, you are being asked to set aside a significant amount of time to do it.
For the people that will take the classes, there's little that needs to be done to convince them to do so. For those that will not take them, there's little that we can do to convince them to do so. With this in mind, I'd like to try something different.
I've seen these examples in things like NetTuts+ and Zed Shaw's and Rob Sobers' "Learn Ruby the Hard Way". These are specific, targeted, longer examples of learning, ways to get into the muck and do stuff, directly, with a dose of humor, and a lot of practical focus. There are many topics in testing that are just casually touched upon, because going into them in depth would be a huge undertaking. Describing context-driven testing principles alone has so many possible variations. Is it any wonder that we often reach for the overtired phrase "it depends"? It's 100% true, it's totally accurate, and most of the time, it's completely unhelpful. Wouldn't it be much more beneficial to gather a number of examples and actually show the differences? We often speak of polar opposites like an MMO video game and a pacemaker, and set these up as our examples of why context matters. I do not disagree, but specifically, what do they do that is different? What do they do that is similar? Why do they make the decisions they do? How can we encapsulate that in a meaningful way for testers to see, experience and consider?
With this in mind, I want to start with the areas I'm already familiar with and expand the conversation from there. I'd like to see AST podcasts, screencasts and videocasts taking on these areas. What's more, I'd like to see more voices included in the discussions. Cem has devoted literally thousands of hours over the years to recording the lectures he uses for BBST. It's been a monumental work on his part, and I have no intention of redoing or replacing it. I also know that I personally don't have the time to produce new video all on my own. What I'd like to see is video conversations and examples explained by people in various industries. We hear all the time about the differences between finance, web, medical, government, academia, and many others. There are testers in all of those spheres. Who would you rather hear talk about their testing challenges and triumphs? Me? Or them? I'd much rather hear from them, from YOU, and I hope to find ways to include YOU in the conversations and developments we make.
All this is my possibly overzealous, San Juan Hill charging way (or it might be a Little Big Horn charging way; time will tell) of saying "this dude is looking to make some new ways to look at testing education". I am not going to be able to do it alone. Are you willing to help me by lending your voice, your experience and your successes, so that we can help teach others and give US the tools to do more and be more? If so, Dudes and Dudettes, leave me a reply and let's get rockin'!
Wednesday, March 7, 2012
TESTHEAD On The Road
It seemed like such a little thing at the time :)!
Note, I'm not complaining, I'm happy to have the opportunities that come with being a passionate advocate, but sometimes I wonder when and how my reality got to be this packed so close together.
First, I'm very excited to be doing my first facilitation of a peer conference. My friends Lynn McKee and Nancy Kelln invited me to come up and act as facilitator for The Calgary Perspectives on Software Testing Workshop (POST). It's an annual peer workshop for software test practitioners; most attendees are from Canada, but a few are traveling in from elsewhere. I'm excited to have been asked to participate, and I'm looking forward to being of service.
I'm home for just a couple of days, and then I fly out again to New Orleans for the Board Meeting of the Association for Software Testing. While I'm there, I'm hoping to see if there's something I can do that Monday to get to know some more people in the Software Test Professionals community attending STPCon (it's a tutorial day, and I'm not officially signed up for the conference, but I am hoping I'll be able to get involved somehow for a little bit and get to meet and talk with people there).
I get a two-week respite, and then it's off to Orlando, Florida for STAREast. I'm excited about this for two reasons. First, I'm honored and privileged to have been chosen to present; second, it's my second chance to give my talk on Weekend Testing that I couldn't present last year due to my broken leg (I had originally written it for the Pacific Northwest Software Quality Conference in 2011 in Portland, Oregon). Lee Copeland liked the content of the talk and asked if I'd be willing to present it at STAREast, to which I said yes, and thus, I'll be there to do so.
Truly, that's more traveling in a five week period than I have ever done before. It's been a challenge getting the rest of my life to align with these realities, but it's also a thrill that I have these opportunities in the first place. I've often joked that one could find opportunities in many places if they were willing to lift up the rocks and look for them. In many cases, when people know what you are up to and what you are willing to do and represent, they will also seek you out and give you the chance to make good on those opportunities. I feel a bit overwhelmed at the moment, but it's a good overwhelmed. It feels like I'm needed, and that people want to hear what I have to say and what I can contribute to the greater cause of testing. For that, I am both excited and grateful.
Sunday, March 4, 2012
Inflicting Help
There are some benefits to my being actively engaged in many causes. My wife, Christina, often comments that she likes it when I have a lot on my plate. I thought that this was because she admired what I did. Turns out, there's a much deeper and darker reason... she likes it when I'm engaged in other things because it keeps me from "puttering around the house".
See, there's a certain danger when you are married to a tester. Testers are fond of asking "is there a problem here?" and applying it to just about everything. I know this to be true because, this weekend, I deliberately made a point of "dis-engaging" from some things so that I could breathe a little bit, get some rest, recharge my batteries, and just not feel like I had to be running on all cylinders. Of course, all it takes is a few minutes and we start to see things that could be "improved" or "tweaked" to work better. Christina dreads when I do this (LOL!).
An example: we had a really nice dinner this evening, and Christina used some of the fancy bowls we have that look like they'd be right at home in a high end Thai restaurant. They are beautiful, but they are large and of various depths, and they just don't fit well in the dishwasher. As I pulled everything out to reload the dishwasher, I let out the phrase "you know, if we used our standard dishes, this would be much easier to load"... and I'm sure a good part of the population knows where this is going, don't you? What was a simple comment on my part turned into a fairly lengthy discussion about how this was a special meal, how she doesn't use these bowls very often, and why was I butting my nose into how she cooked and managed meals anyway? I committed a fairly common sin here... I inflicted help where it wasn't really wanted.
Not content to leave well enough alone, later on I was helping my daughter with a project, and I noticed that she was working on a drawing from a computer screen image. As I watched her draw and scoot things around on the screen, I thought, "you know, monitors are not very expensive, and we could easily get her a monitor that is double the size and clarity of the one we have right now... oh, but it would mean I'd have to remove the top part of the desk... well, is that really all that useful? Let me ask Christina." Really? Am I going to do this again?! Twice in one day? Of course I am!
Needless to say, that didn't go over very well, especially when I brought up the benefits of re-purposing the room and making it more efficient. Her response? "Well, if space is such a concern, imagine how much space we could reclaim if that gargantuan fish tank were disassembled and moved out of the room. Now that would give us some major space in that room!" Needless to say, I was less than enamored with that idea, but it made the point very clear to me. I often look for improvements that require little of my own habits to change, or, barring that, changes that keep many of the "features" I care about, without necessarily looking at the features my wife or my kids care about. Honestly, they are often not the same things.
Thus, for the time being, I will keep my notes and my observations to myself, or at least be less vocal about them, unless or until Christina brings up a problem area herself and asks for my opinion. I will also go back to my guiding mantra from my work environments and better implement it at home: know when your input is of value and can help with genuine issues, but likewise, know when you are just inflicting help. If it's the latter, it's rarely going to go well :).
Friday, March 2, 2012
Pomodairo: A Cute Way To Stay On Target
Let's face it, procrastination is a reality that most of us deal with at one point or another. There are a zillion and one ways to deal with it, and plenty of methods and techniques we can put into practice to help us overcome it.
In truth, I don't really think we can overcome it, and I also don't think we entirely should. Just as life is what happens when we are busy making other plans, our bodies and minds tell us when it's time to do important things, and when it's time to do anything but those important things.
One of the great secrets of doing, well, anything that requires focus is the willingness to get your head in the game, and do so for a specific length of time. There are lots of approaches to this. My favorite is Merlin Mann's (10+2)x5 Procrastination Hack, which I've talked about before (10 minutes of focus, 2 minutes of break, done successively five times). At the end of it, you have 50 minutes of focused work and 10 minutes of distraction. The Pomodoro technique is built on the same idea, except that it uses 25 minute periods of focus and 5 minute periods of break.
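If you like seeing the arithmetic spelled out, here's a toy sketch (I'm assuming Python here; the function name and the printed messages are just mine for illustration) of what that focus/break loop boils down to:

```python
import time


def focus_cycles(focus_min=10, break_min=2, rounds=5):
    """Run a (focus + break) x rounds pattern, e.g. Merlin Mann's (10+2)x5."""
    for cycle in range(1, rounds + 1):
        print(f"Cycle {cycle}: focus for {focus_min} minutes.")
        time.sleep(focus_min * 60)   # heads-down work
        print(f"Cycle {cycle}: take a {break_min} minute break.")
        time.sleep(break_min * 60)   # step away from the keyboard


# (10+2)x5: 50 minutes of focus, 10 minutes of breaks in an hour.
focus_cycles(10, 2, 5)

# Classic Pomodoro is the same loop with different numbers:
# focus_cycles(25, 5, 4)
```

Same loop either way; only the numbers change.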
This is all an elaborate way to tell you about a timer that I downloaded called "Pomodairo". It's an app that runs on my Mac, and it lets me set the focus and break time periods, whether I go with a classic Pomodoro interval or the (10+2)x5 approach. Pomodairo is a project hosted on Google Code, built on the Adobe AIR platform. Pomodairo includes the timer, as well as the ability to construct task lists for the pomodoros. The application can also keep track of completed pomodoros, interruptions and other things that will help you actively track your time and focus during the day. Pomodairo also has the ability to synchronize tasks if you are using multiple computers.
Best of all, this is a freeware app (of course, if you like it, they have a Donate button to contribute to the cause).
Procrastination is real, but it doesn't have to sap all of your energy. Sometimes a little gimmick is all it takes to get focused and get into a zone to be productive. Pomodairo is a simple way to do exactly that.
For more about the Pomodoro technique of time management, go to http://www.pomodorotechnique.com.
Thursday, March 1, 2012
Ask the TESTHEAD
I have an interesting opportunity.
Software Test and Quality Assurance (ST/QA) Magazine runs a feature called "Ask the Tester", where they pick anywhere from 10 to 15 questions and present them to the person selected. The featured tester then answers those questions, and the answers become an article in the magazine.
For the May issue, that tester will be me :).
A number of people have asked me various questions already, but I wanted to throw this open to anyone who would be interested in participating. If you would like to ask me a question that has something to do with software testing, please include it as a comment to this message. I will then submit the questions to ST/QA, and they will pick the ones that they would like to use. If you would like to leave a name and the city where you are located, I can pass that along as part of the question. If you don't leave a name, we'll just include "Name withheld" with the question.
So here's your chance. If you've ever had a question you wanted to ask me (again, one related to software testing ;) ), now's the time!
A Little Deconstruction: Selenium Meetup, San Francisco
It's been a while since I've had a chance to get out to one of the SF Selenium meet-up nights, so I was glad to get out, see some old friends, have some good food, hang out at the Huddler office, and hear an interesting talk about breaking down and understanding the plumbing and inner workings of Remote WebDriver.
Santiago Suarez Odonez led the discussion this evening, specifically focusing on the topic of "Stripping Down Remote WebDriver". This was a different kind of talk from what I have usually seen, heavy on code and implementation rather than how-to. For the first time at one of these things, I felt strangely... able to follow along. Many of the code examples that tended to feel very esoteric before were much more accessible. I actually felt like I was able to follow along at least 80% of the way (usually, I feel lost about halfway through these kinds of talks).
Remote WebDriver allows the user to separate the tests from the browser, as well as to test browsers running on different machines with different OS requirements. As can be expected, a major disadvantage is that a remote server is required, and this requirement introduces latency (though when used with Selenium Grid and the ability to run multiple tests in parallel, the net savings can be far greater than the latency introduced). Communication is done over HTTP using the JSON Wire Protocol.
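To make that concrete, here's roughly what driving a remote browser looks like with the Python bindings of that era (a minimal sketch, not Santiago's code; I'm assuming a Selenium server is already running at localhost:4444, and the URL is just a placeholder):

```python
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

# Assumes a Selenium server (or Grid hub) is already listening at this address.
driver = webdriver.Remote(
    command_executor="http://localhost:4444/wd/hub",
    desired_capabilities=DesiredCapabilities.FIREFOX)

try:
    # The test code stays the same whether the browser is local or remote;
    # only the command_executor URL changes.
    driver.get("http://example.com")
    print(driver.title)
finally:
    driver.quit()
```

The nice part is that the test itself doesn't care where the browser lives; point command_executor at a Grid hub and the same script can fan out across machines.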
Santiago then moved on to talk about the JSON Wire Protocol. The stated objective is to "keep the intent in the test, and the actions only go over the wire".
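To illustrate what "the actions only go over the wire" means, here's a rough sketch of a test speaking the JSON Wire Protocol directly using Python's requests library. The endpoint paths come from the protocol itself; the hub address and the selector are placeholders, and the real bindings handle the response parsing (which can vary a bit by server version) for you:

```python
import requests

HUB = "http://localhost:4444/wd/hub"   # placeholder Selenium server address

# Each WebDriver "intent" becomes a small JSON payload POSTed to an HTTP endpoint.
session = requests.post(
    f"{HUB}/session",
    json={"desiredCapabilities": {"browserName": "firefox"}}).json()
sid = session["sessionId"]

# Navigate to a page.
requests.post(f"{HUB}/session/{sid}/url", json={"url": "http://example.com"})

# Find an element and click it -- two more HTTP round trips.
found = requests.post(
    f"{HUB}/session/{sid}/element",
    json={"using": "css selector", "value": "a"}).json()
element_id = found["value"]["ELEMENT"]
requests.post(f"{HUB}/session/{sid}/element/{element_id}/click", json={})

# Tear the session down.
requests.delete(f"{HUB}/session/{sid}")
```

Every one of those calls is a round trip to the server, which is exactly where the latency mentioned above comes from.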
This approach excites me, in that it gives me a glimpse into how I could make my current test environment more extensible and do more automated cross-browser tests. I also liked that Santiago encouraged people to get into the application, download the code, and get over the nervousness of working with it. The more knowledgeable all of us are about the code and its benefits (and limitations), the better our ability to leverage the features and actually make timely and meaningful changes. For the record, Santiago killed it in this talk. Very well organized, focused and engaging, and again, I felt like I understood the large majority of what was being discussed.
My thanks to Ashley Wilson and Sauce Labs for hosting these events. I really appreciate the benefits they provide to our community, and I appreciate the opportunity to meet my fellow developers and testers and learn from them. Here's hoping that 2012 will give me more opportunities to come out and participate more often.