Wednesday, May 26, 2010
Wednesday Book Review: How to Break Web Software
Through the years, I have come across a number of books that I have used and valued. These may be new books, or they may be older ones. Each Wednesday, I will review a book that I personally feel would be worthwhile to testers.
Whittaker’s “attacks” have become common parlance in testing. Many testers know about them and many testers use them. I reviewed the first How to Break book a couple of weeks ago. How to Break Web Software, written by Mike Andrews and James Whittaker, is book number 3 (for full disclosure, I do not own nor have I yet read book #2, How to Break Software Security, but it’s on my list of books to read, and soon :) ).
Web development is a different beast compared to typical software development. Coding a web site and deploying it is much faster than the similar effort to create a Windows application that does many of the same things, so web development happens at a much more rapid pace. By its very nature, though, a web application is online all the time, which makes it a very tempting target for hackers and those who mean to do harm to or exploit the system. How can you find out how your server responds to various attacks? That’s right, perform them yourself!
How to Break Web Software is structured in much the same vein as the first “How to Break Software” book. It presents an interesting area under the guise of “planning an attack”, and it gives the reader a chance to become familiar with the necessary attack style, and tools where relevant. So what are the specific attacks? Here they are, in order (a short sketch of a couple of them in action follows the list):
Attack 1: Panning for Gold (open up the web page’s source code listing and scan it; what you find may often astound you)
Attack 2: Guessing Files and Directories
Attack 3: Holes Left by other People: Vulnerabilities in Sample Applications
Attack 4: Bypass Restrictions on Input Choices
Attack 5: Bypass Client Side Validation
Attack 6: Look for Hidden Fields
Attack 7: Look for CGI Parameters in the URL
Attack 8: Create Cookie Poisoning
Attack 9: Perform URL Jumping During Transactions
Attack 10: Perform Session Hijacking
Attack 11: Access Cross Site Scripting (XSS)
Attack 12: Enter SQL Injection
Attack 13: Perform Directory Traversal
Attack 14: Create Buffer Overflows
Attack 15: Use Character Canonicalization to Get Around The System
Attack 16: Create NULL-String Attacks
Attack 17: Inject Stored Procedures
Attack 18: Inject Commands
Attack 19: Fingerprint the Server
Attack 20: Perform Denial of Service Attacks
Attack 21: Break Less than Adequate, Roll-Your-Own Cryptography Schemes
Attack 22: Break Authentication
Attack 23: Perform Cross-Site Tracing
Attack 24: Break the SSL Cipher
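To make a couple of these concrete before moving on, here is a minimal sketch (in Python, using the requests library) of how a tester might probe Attack 5 (bypass client-side validation) and Attack 12 (SQL injection) by hand. The endpoint URL and field names are hypothetical stand-ins for illustration, not anything from the book.

    # A minimal sketch of attacks 5 and 12: talk to the server directly,
    # skipping any JavaScript validation the browser would have enforced.
    # The URL and field names below are hypothetical.
    import requests

    BASE = "http://test-server.example.com/login"  # hypothetical endpoint

    # Attack 5: bypass client-side validation by posting values the
    # browser-side checks would normally reject (empty, overlong, wrong type).
    for value in ["", "A" * 5000, "-1", "not-an-email"]:
        r = requests.post(BASE, data={"username": value, "password": "x"})
        print(f"input={value[:20]!r:25} status={r.status_code}")

    # Attack 12: classic SQL injection probe strings. A 500 error or a
    # changed response often hints that input reaches the database unescaped.
    for probe in ["' OR '1'='1", "'; DROP TABLE users; --", "admin'--"]:
        r = requests.post(BASE, data={"username": probe, "password": "x"})
        print(f"probe={probe!r:30} status={r.status_code} bytes={len(r.content)}")

Nothing fancy, but it illustrates the point underlying several of these attacks: client-side checks prove nothing, because the server sees whatever the tester chooses to send it.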
The book also comes with a CD containing various tools that let the reader install and experiment with the attacks (if an attack calls for a particular tool, that tool is included on the CD). There’s also a bit of additional material towards the back that seems repetitive and not entirely relevant. The techniques themselves are genuinely interesting, and they will leave the reader curious to see what the next potential exploit will be.
Bottom Line:
For those starting out in web testing who would like an effective arsenal of tests to perform, How to Break Web Software will certainly fill that role, and it provides a bevy of tools to assist in the process. More experienced testers may find some nuggets of wisdom here as well, though I think the greater value is for novice testers just starting to understand web environments and web testing.
Thursday, May 20, 2010
Harnessing the Power of Procrastination
One of the things I do to unwind, and to make my brain think on other things for a little while, is download podcasts. I have somewhere around 200 hours of podcasts on my Zen, and one that I return to over and over is Merlin Mann’s 43 Folders interview with David Allen (of “Getting Things Done” fame). During their interview sessions, I found David’s advice regarding procrastination very revealing. David tells Merlin what he does when he is procrastinating: he creates other projects that require attention, but that don’t get to the heart of what he really should be doing. His explanation is that, if he does find himself procrastinating, at least he is procrastinating productively.
I realize now why I find myself listening to this particular podcast over and over… I am a serial procrastinator. I don’t particularly want to be, but sometimes I just find it hard to get into that flow I so badly want. As I mentioned in a previous entry, my mind tends to wander, and while I find it helpful at times to let it do just that, there are other times when I just have to stop what I’m doing (or not doing) and get back into the rhythm of what I need to be doing.
To this end, I owe a debt of gratitude once again to Merlin Mann for talking about something that really does work for me when I allow myself to dive into it. It’s what he calls the “Procrastination Hack (10+2)*5”, otherwise known as “work the dash and take the break”. Here’s how it works. Get a kitchen timer, or use your computer with Outlook or some other time-tracking tool; whatever it takes, just get something that can reliably measure off 10 minutes of your time. During those 10 minutes, focus your attention like a laser on the task at hand. Commit to removing any and all distractions that would take you away from your purpose for those 10 minutes. When the timer goes off, set it for two minutes and take a break from that task. Focus on something else, but only for those two minutes. When the alarm goes off again, set the timer for 10 minutes and go back to that laser focus. Repeat this process 5 times. At the end of that 5th cycle, at the top of the hour (or your hour, whenever you started), begin the (10+2)*5 process again.
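If you don’t have a kitchen timer handy, the whole hack fits in a few lines of Python. This is just a sketch of the cycle described above, with the terminal bell standing in for the alarm; dress it up however you like.

    # A bare-bones sketch of the (10+2)*5 procrastination hack: five cycles
    # of 10 minutes of focused work, each followed by a 2-minute break.
    import time

    WORK_MIN, BREAK_MIN, CYCLES = 10, 2, 5

    for cycle in range(1, CYCLES + 1):
        print(f"Cycle {cycle} of {CYCLES}: focus for {WORK_MIN} minutes...")
        time.sleep(WORK_MIN * 60)
        print("\a  Break! Step away for 2 minutes.")  # \a rings the terminal bell
        time.sleep(BREAK_MIN * 60)

    print("Hour complete. Start the next (10+2)*5 round whenever you're ready.")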
This may sound trite and silly, but for those who have challenges with keeping focused on less than pleasant tasks, whatever they may be, this is a great method to get more done than you might otherwise. Give it a shot and if it works for you, head over to 43Folders.com and tell Merlin how much he rocks (it’s his idea after all, I just really like using it :) ).
Wednesday, May 19, 2010
Wednesday Book Review: It’s Called Work for a Reason
Through the years, I have come across a number of books that I have used and valued. These may be new books, or they may be older ones. Each Wednesday, I will review a book that I personally feel would be worthwhile to testers.
I’m going to take a different tack with today’s book review. This book has little specifically to do with testing, but it has a lot to do with setting a demeanor and an approach that will get people to take you seriously in the workplace, regardless of your chosen career. Let’s face it: it’s a rare person who gets to do what they love every day of their work lives. Much of the time, we have to deal with issues, situations, people, and projects that try our patience, frustrate us, and make us wish we were doing anything but that. Many people will find themselves fighting with these aspects while they seek out the “dream gig”. Larry Winget has some strong words for those of us who are seeking. The problem isn’t the job. The problem is us.
For those not familiar with Larry Winget, he’s made a living touting himself as the world’s greatest “Irritational” speaker. He’s loud, crass, occasionally rude, and often very funny, but one thing always rings out with Larry: he tells it like it is, and he speaks from his core and his gut. He’s not going to say things to make people happy. He says what he feels people need to hear.
This is a book you will enjoy in parts and want to throw at the wall in others. When you find yourself wanting to throw it, pay close attention … chances are, something has just hit close to home, and you may want to start giving serious consideration to what you have just read (that’s my recommendation, in any event, based on cold, hard experience here :) ).
The core premise -- work is WORK. It is not a social club. Companies are loaded with people who add little value beyond their just being there. Truth be told, on any given day, we may be those exact people. Larry shouts it in a way that is absolutely impossible to deny. At the same time, many companies disrespect and undervalue the employees they do have (“just shut up, do your work, and be glad you are getting a paycheck”), which tends to exacerbate the situation.
Both sides get the “Larry Winget” treatment, which is a blunt and direct upbraiding and shout-down as to what not to do and how to recognize when you are doing it. There aren’t a lot of fresh or amazing insights in this book. Well then, what’s the point in reading it? The point is that many of us need a good kick in the backside every now and then. Much of this book is common sense, proper etiquette, and professionalism. Much of it is going to seem very basic and obvious. You will sit back, chuckle, and say “well, I’m already doing that. What’s the value in here for me?” My guess is that the value will come when you reach the sections where you want to throw the book across the room. Is it likely you will find yourself in all of the sections? Probably not. Is it likely you will find yourself in some of the things Larry mentions? Definitely.
This book dovetails well with a book I reviewed a few weeks back, Linchpin by Seth Godin. Godin makes the case that people should strive to make themselves indispensable to their workplace. Larry says the same thing, albeit with a different focus. Linchpin tells us we need to be indispensable and why it’s important; WORK tells us to quit whining, get on the ball, make up our minds, and do something to make ourselves indispensable.
My personal favorite section of the book is what Larry refers to as the “Dirty Dozen Employee Handbook”, which distills the essence of the book into what an individual contributor can do:
1. Focus on accomplishment. Be known as the person who gets things done.
2. Develop a reputation you are proud of.
3. Be trustworthy. Be the person who can keep a secret, isn’t a gossip, and can be counted on in all situations.
4. When you give your word, keep it. Without exception.
5. Be on time. Be where you are supposed to be when you are supposed to be there.
6. Don’t brag. It’s obnoxious and it alienates others.
7. Don’t complain. No one cares, and they have problems of their own to deal with.
8. Friendship among coworkers is a bonus. It is not required or to be expected.
9. Don’t tolerate abuse, disrespect, or a lack of ethics or integrity from your employer. Life is short; there are other jobs.
10. Find out what the single most important thing is about your job, and then make sure it gets done. If nothing else gets done, make sure that one thing gets done.
11. Serve the customer well, whether you call that customer a client, patient, coworker, or boss (or programmer or stakeholder, for those of us who test). Your rewards in life are in direct proportion to the service you provide.
12. Remember that you work for someone. That person has the right to say what you do, when you do it, and how you do it.
Bottom Line:
Larry is blunt, edgy, abrasive, and loud. If you really want the full flavor of this title, I’d suggest the audio version, as you can hear Larry’s intonation and comments in all of their rustic glory. I’ll be frank: Larry’s an acquired taste, and if you don’t like gruff, brash straight talk about a variety of things, this book may turn you off. If, however, you can deal with a splash of cold water and like a delivery rich with accountability and self-responsibility, then yes, I highly recommend this book. For anyone who wants some straight, non-sugared, non-watered-down talk about how to get in, take command of whatever level you work at, and do all you can to make the most of it, I’m willing to bet you’ll enjoy this book… even if you feel like throwing it across the room a time or two :).
Tuesday, May 18, 2010
The Human Face of “Mission Critical”
One of the interesting aspects of testing is that we have different ranges and levels of testing that need to be performed. On one end, there is the somewhat trivial level of testing a vanity web site to make sure that information is displayed correctly; if it is wrong, it’s a nuisance, but most people’s lives will not be greatly impacted. On the other end, there are “mission critical” applications, those that, if something goes wrong, can have a devastating impact, up to and including loss of life.
This point was brought home to me last night as I was discussing one of the assignments for the Black Box Software Testing: Foundations class I’m currently taking through the Association for Software Testing. As we discussed applications that are mission critical, I realized that, perhaps outside of testing routers for Cisco, most of the applications I have worked with would not have a direct effect on people’s lives if they were to stop working (outside of the annoyance factor, virtualization software, capacitance touch devices, video games, and immigration software are not genuinely life-or-death issues). My father, on the other hand, actually did work on application programs that were truly mission critical, in the sense that they had to do with the mixture of drugs and medications used in a neonatal intensive care nursery.
My father lived a dual life in the medical field. On one hand, he was a pediatric physician (now somewhat retired, though he still volunteers at a free clinic a few nights a week). At the same time, he was and still is an avid programmer (he dates back to the days of warehouse-sized supercomputers and paper punch-card process loads). During his years as an active physician, he always wanted to help bring the power of computing and calculation to the hospital floor, and he worked hard to develop programs that did exactly that. Interestingly enough, some of the programs he wrote in the late 70s and early 80s are still being used today.
While I was talking about testing needs and the idea of what constitutes “good enough” testing, he shared a story with me about a critical bug found nine years after a program had been coded, tested, and put into regular use. This program took readings from monitoring equipment used in the pediatric Intensive Care Unit; by entering data about the patient, their vitals, and other status details like heart rate, pulse, and breathing, the doctor or nurse could have the program calculate a mixture of intravenous fluid combining the necessary nutrients and medications for that patient (most of whom were premature babies or other infants with early health issues). My dad said they discovered that, due to a variable that in a very rare instance could get overwritten by a loop in the code, a dangerous amount of potassium had been mixed, and the effect could have resulted in the infant’s death had it not been caught in time.
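I never saw the actual code, so what follows is purely a contrived illustration of the kind of bug he described: a loop that, on a rare combination of inputs, silently overwrites a value that was computed correctly moments earlier. All names and numbers here are invented.

    # A contrived illustration (NOT the actual hospital code): reusing a
    # variable inside a loop can clobber a safe, previously computed value.
    def mix_iv(vitals, additives):
        potassium_meq = vitals["weight_kg"] * 0.5   # the intended, safe dose
        for additive in additives:
            # Bug: on the rare input where an additive is itself potassium,
            # the loop replaces the safe dose instead of checking against it.
            if additive["name"] == "potassium":
                potassium_meq = additive["stock_meq"]  # overwrite, not a check!
        return potassium_meq

    # Common input: the computed dose survives the loop untouched.
    print(mix_iv({"weight_kg": 2.0}, [{"name": "dextrose", "stock_meq": 0}]))   # 1.0
    # Rare input: the loop overwrites the dose with a dangerous stock value.
    print(mix_iv({"weight_kg": 2.0}, [{"name": "potassium", "stock_meq": 40}])) # 40.0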
This was a discovery made after nine years of active use, in which the system had worked (as it would have seemed at the time) flawlessly for many thousands of patients over nearly a decade. My dad went in and fixed the code, the nursing staff ran tests, the group reviewed the work and determined it could go back into production, and that program is still in use today, 19 years after that discovery.
We have to realize that we can test for many days, weeks, or months, and we can be aggressive and focused, but we cannot test everything and we cannot guarantee that all issues are addressed. In many cases, good enough is OK, but there are some instances where good enough just won’t do. My dad’s story of such an incident was sobering, and it gave me a clear picture of the human cost of a difficult-to-find error. I may never get the chance to test something this important to the lives of other people, but if I do, I will remember this story from my dad, and the very real human price that can be paid when even a seemingly simple program that has been used for years ultimately shows a problem.
Monday, May 17, 2010
Testing “The Grizzly”
This Saturday, I had a chance to get away from things for a while and spend the day with my older daughter at California’s Great America (which has gone through many ownership changes since its inception in 1976, the year of my first visit there). I had fun showing her which rides were original (albeit with different names) and which had come along in the last decade or so.
It was while we were waiting in line for The Grizzly (an old-fashioned wood-frame roller coaster, complete with the rumble and bang that make those rides so jarring and fun) that the ride was stopped and the people in line were told “The Grizzly is experiencing technical difficulties; you may wait until it is resolved, or you may exit the way you came.” 90% of the people in line exited, but 10% stayed put. My daughter and I were in the pole position for the front car (the ride went on standby just before we were to load). Having waited 45 minutes to get to that point, I suggested we wait it out and see how long it would take.
Part of this was my teaching my daughter the virtue of patience, but I must confess I had an ulterior motive. I’d never been this close to the main dispatch when a problem had occurred. I was really curious to see what the problem was, and what would resolve it. While I can’t be 100% sure, as the people on the other side of the track were far enough away that I couldn’t hear them, I could gather that there was an issue with the braking and advancing system (the one that queues cars up for loading and then pushes cars onto the chain track was not triggering correctly; cars would get in line for loading onto the chain and then just stop).
The steps for getting the system back into operation were interesting:
• The ride attendant makes the announcement, and then radios someone.
• The more senior ride attendant (I’m guessing) comes and looks over the console with the first ride attendant. The senior attendant pokes and prods a few things.
• Both go to both sides of the track and push a few buttons. Nothing happens.
• Another radio call. This time an older person with a jacket appears (ride management? not sure) and *they* look over both consoles. The lady with the jacket hands them an orange piece of paper (their trouble ticket system?). Someone triggers a switch and the car that was stalled on the track (with people in it) is released to be added to the chain (this is a fully gravity-driven coaster; one chain ride up and the rest is all done with the ups and downs of the track).
• The car in the waiting queue is let through (without people) and sent to the spot to be loaded on to the chain. No problems. The returning car with people gets back to the spot, the car is unloaded, and then the cars are run through the system a few times. After about 4 loads on the chain, the cars stop short of the load point.
• Another call. This time a guy with a pair of aviator sunglasses and a toolbox arrives. OK, now we know something is wrong. The guy with the toolbox hands the lady in the jacket a blue piece of paper.
• Guy with glasses asks a few questions from both sides of the track, releases what must be an override to load the stuck car onto the carry chain, and the process repeats, this time letting 6 cars load without people until it sticks again.
• At this point, the glasses guy scratches his chin, goes over to the rear console (where the back of the train would be), unscrews the top of the console plate, looks at a few wires, then puts the plate down and unscrews the microphone used by the ride operators to communicate with the crowd.
• Four more car trains are allowed to pass through, both pieces of paper (orange and blue) are handed to the lady in the jacket, and with that, the attendant announces “Welcome to the Grizzly,” and everyone waiting gets in the cars and rides the ride.
The total amount of time from ride stop to restart turned out to be twenty minutes, but I found the whole thing rather instructive. First, it was clear the initial thought was that the problem was with something on the track; the tests they ran, and the order they ran them in, pointed to that. They took the time to get a number of opinions on the issue, and to involve someone with the authority to make a decision (I’m guessing the orange paper was either the acknowledgment of a problem or proof that they were allowed to override the controls). Through additional testing, a senior mechanic was called, and through some quick determination, and a blue piece of paper, he went over and determined there was a short in the communication relay (how this was affecting the track control, I honestly don’t know, but I would have loved to chat with the mechanic to see how he determined that), followed by confirmation that the system was working as expected after the change.
A simple 20-minute wait gave me insight into the ways and methods Great America uses to test and troubleshoot their rides, and it also gave me a great feeling that there were people there who knew what they were doing and could do it quickly. I love Great America for lots of nostalgic reasons. Having a glimpse into their quality assurance and control mechanisms makes me like them just a little bit more today :).
Friday, May 14, 2010
What Does Q.A. Mean To Me?
I think, in the testing world, this is the most bandied-about question I have heard discussed, debated, and argued. Since I purport to have a blog dedicated to talking about testing, it’s only fair that I go on the record with my thoughts on it.
First and foremost, Quality Assurance is a nebulous description for testers, and in many ways it is not helpful. I am opposed to the idea of a “Quality Assurance Team” that is separate from development (put down the pitchforks, people, lemme’ ‘splain!). Quality Assurance is an empty promise; we cannot “ensure” quality. All we can do is point out issues with a product and call its quality into question. That’s it. We cannot magically bake quality into a product. We cannot wave a magic wand and exorcise bugs from a program. We can point out to developers the issues we find when we test.
Quality Assurance is not just my team’s job. Rather, it has to be the mission of the entire company: a dedication to making sure we all spend the time and energy needed so that a product ships with as few issues as possible. Testers provide an indication of how well the company is achieving that goal. Rather than a gate (or my favorite overused and abused metaphor, the “bug shield”), we are more closely aligned with the function of a gauge. Instead of looking at software as buggy data that drops into QA as though QA were a function that magically cleanses the code, with bug-free software coming out the other side, we can tell the story of what we have seen and give the company and the development team information that says “here is where we are at”. The tester tells a story and gives information to show the state of the application. From there, the developers can decide what they want to do with that information (using a GPS as an example, they can stop, turn around, and make changes before continuing, or they can just keep moving forward).
Regardless of my personal feelings as to what my role is and how I would like to see myself in that role, the truth is, whether I like it or not, most other people in an organization do look at the QA tester or the QA team as “the last tackle on the field”. In my current environment, yes, that is the case, and it requires me to be very strategic and creative. While I may not be the one who put a problem in, I will certainly catch a fair share of the heat if a customer discovers the problem. Thus I have to embrace the fact that, whether or not I like or appreciate the “bug shield” metaphor, it’s the role that others see me playing, and I cannot just abandon it.
So what can we do? What is our mission, our real value to the organization? What’s the bottom line of what we offer? In general, my answer is that “I save the company money”. Every bug that I find, whether major or minor, has a hand in determining whether a customer stays a customer, talks about our product in good or bad terms, purchases another seat for their company or just “makes do” for the time being. It can be tricky to measure, and it’s not as hard and fast as a sale vs. no sale, but it does help make clear what we as an organization provide (and in this case the “we” means me; remember, I’m a lone gun at the moment, but I have hopes that may change at some point). How about you? Where do you see yourself in the Q.A. picture?
Thursday, May 13, 2010
Talents Can Be Strange Allies
I remember reading the job description with a bit of amusement...
Wanted: software tester for game development and publishing company. Strong attention to detail required, experience with defect tracking systems, ability to work with small team, etc., the standard things we see on most QA job postings... and one really interesting requirement:
"Excellent singing voice a major plus!"
Huh?!
This was a listing on Craigslist, so it didn't have the company name or any details about why this was a requirement. Still, I figured I had nothing to lose, so I sent the following (paraphrased, it's been 7 years ;) ):
Hello, I would be interested in knowing more about this position. For QA experience [...] and as far as an excellent singing voice, well, I guess that is subjective, but I was a professional singer in the Bay Area for close to a decade [and I gave them a link to some High Wire footage and MP3s].
If you consider that "excellent", then yes, I would love to know more about what this project would be.
I received a call two days after writing back, and an interview the next day. It turned out that the company was Konami Digital Entertainment (a Japanese video game publisher) and the project was a joint development between Harmonix (a Boston-based game company) and Konami called "Karaoke Revolution". This was to be the beginning of a long-running franchise for Konami, and while they had a number of software testers who could test the components of the game related to navigation and controls, the one area they couldn't simulate was the real interaction of a person singing into a microphone to gauge performance at various difficulty levels. This was where I and a number of other testers with an interestingly specialized skill came in. For the two years that I worked with Konami, I had other test opportunities and other titles I worked on, but I became synonymous with Karaoke Revolution because I could test the game at the expert-mode level more consistently than anyone else. I still have the promotional box signed by all of the members of the team, and I smile each time I see the producer's signature and comment... "three cheers to Mr. Expert Mode!" :). I also had a "little dream" come true while I was there. If you get the PS2 disc of Karaoke Revolution Volume 3, or you subscribe to the Xbox Live network, and you play the game and sing along to "China Grove"... the guide vocal you are singing with is yours truly ;).
So why am I sharing this story with you today? Because there are a lot of things that go into making us testers. All of our talents, passions, and things that we love to do help to inform our testing. If you have a particular talent, seriously consider exploring it as part of your testing development. I enjoy playing guitar, and as such, I find the various software applications related to guitar and teaching guitar interesting. Yes, playing guitar can inform your testing, because you bring domain knowledge to a product and, based on that knowledge, you can see and recognize issues that may go unnoticed by people who are good testers and passionate about software, but who don't have that key talent you possess.
So consider this a way of encouraging all of you other testers out there. Look to what really makes you passionate. What hobby, activity, or talent do you have? Identify it, and try to tease out a few paragraphs about why you like it so much. Once you have done that, look for software products and projects that serve that particular talent. Many of these may be open-source or collaborative projects, so contact the makers of the software or service, tell them about your hobby and that you are a tester, and see where the conversation leads. You may find that your passion for your hobby or talent provides valuable information for the makers of applications that touch something you love, and quite possibly helps create something better that you enjoy interacting with (increasing that passion) while allowing that company or group to create a better application for others. Win/win.
So the next time you find yourself thinking about something you love to do, or a latent talent you may have, give some consideration to what you love about it and to what might be out there where you could hone both your test skills and your muse. While such exploration may not net you your next dream job, who knows, it just might :).
Wednesday, May 12, 2010
Wednesday Book Review: How to Break Software
Through the years, I have come across a number of books that I have used and valued. These may be new books, or they may be older ones. Each Wednesday, I will review a book that I personally feel would be worthwhile to testers.
Today's book review is a bit retro. James Whittaker has written a trio of “How to Break” books. This one is the first, published in 2003, and for me it was a great shot in the arm, prompting me to look at some different approaches to testing. First and foremost, this is not a theoretical book. It’s a practical book filled with “how to apply this stuff in the real world”. Whittaker designed the book around the concept of “waging war” on software, and the book has sections that describe “attacks” that can be applied to a software application. Why attacks? Because Whittaker felt it would make testing more fun... and he's right! IMO, the attack metaphor does make testing more fun. If you listen to any software quality podcasts and you hear the term “Whittaker Attacks”, this is the book that contains and describes them. So does James Whittaker offer a “battle plan” that’s all that? Let’s take a look.
In How to Break Software, James makes the case for creating a “fault model”. This way, we can determine what approach we want to take when testing an application. The idea for the fault model comes from the way the user interacts with an application, and the way the system interacts with an application. The basic idea is that:
- A human user calls the application.
- The application requests memory and resources from the kernel.
- The application establishes connections to things like databases, libraries, etc.
- The application opens and closes files on the system and accesses peripheral devices.
Once the user understands where they interact with the system and how to interact with it, they can then “go and explore”. The approach to testing recommended is to “wage war” and “attack” the software.
Each attack mentioned in the book is presented and structured the same way. The attack is first named, and then Whittaker explains when to apply the attack, what software fault model makes the attack successful, how to determine if the attack exposes failures, and how to actually conduct the attack. Each example shows a real-world application under test and how the bug was triggered. This takes the idea of testing software out of the theory books and gives readers a direct, hands-on method for trying it out themselves.
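That four-part structure is regular enough that you could catalog attacks in it as you read. Here is a hypothetical sketch in Python of what such a record might look like; the wording in the example fields is mine, not quoted from the book.

    # A hypothetical record for cataloging attacks in Whittaker's four-part form.
    from dataclasses import dataclass

    @dataclass
    class Attack:
        name: str            # what the attack is called
        when_to_apply: str   # the situations where it is worth running
        fault_model: str     # why the underlying fault lets the attack succeed
        how_to_detect: str   # what an exposed failure looks like
        how_to_conduct: str  # the concrete steps to carry it out

    overflow = Attack(
        name="Overflow input buffers",
        when_to_apply="Any field that accepts free-form text",
        fault_model="Missing or incorrect length checks on input",
        how_to_detect="Crashes, hangs, or garbled data after long input",
        how_to_conduct="Paste progressively longer strings into each field",
    )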
Talking about each of the attacks would take up way more room than a review realistically allows, but I have listed them below so that you can think about them and how you might use them in your own tests (a small sketch of a couple of them in action follows the lists).
User Interface Attacks
Attack 1: Apply inputs that force all error messages to appear.
Attack 2: Apply inputs that force the software to establish default values.
Attack 3: Explore allowable character sets and data types.
Attack 4: Overflow input buffers.
Attack 5: Find inputs that may interact and test combinations of their values.
Attack 6: Repeat the same input or series of inputs numerous times.
Attack 7: Force different outputs to be generated for each input.
Attack 8: Force invalid outputs to be generated.
Attack 9: Force properties of an output to change.
Attack 10: Force the screen to refresh.
Attack 11: Apply inputs using a variety of initial conditions.
Attack 12: Force a data structure to store too many or too few values.
Attack 13: Investigate alternative ways to modify internal data constraints.
Attack 14: Experiment with invalid operand and operator combinations.
Attack 15: Force a function to call itself recursively.
Attack 16: Force computation results to be too large or too small.
Attack 17: Find features that share data or interact poorly.
System Interface Attacks
Attack 1: Fill the file system to its capacity.
Attack 2: Force the media to be busy or unavailable.
Attack 3: Damage the media.
Attack 4: Assign an invalid file name.
Attack 5: Vary file access permissions.
Attack 6: Vary or corrupt file contents.
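To give a feel for how mechanical some of these can be to try, here is a minimal Python sketch of User Interface Attack 4 (overflow input buffers) and System Interface Attack 4 (assign an invalid file name), driven against a hypothetical command-line program. The program name and flags are invented for illustration; point the same idea at whatever you actually test.

    # A minimal sketch of two of the attacks above, aimed at a hypothetical
    # command-line program called "myapp" (name and flags are invented).
    import subprocess

    # UI Attack 4: feed progressively longer strings and watch for crashes.
    for size in (256, 1024, 65536):
        result = subprocess.run(["myapp", "--name", "A" * size],
                                capture_output=True, text=True)
        print(f"len={size:6} exit={result.returncode}")

    # System Interface Attack 4: try file names the OS or app should reject cleanly.
    for bad_name in ("con", "a" * 300 + ".txt", "bad|chars?.txt"):
        result = subprocess.run(["myapp", "--output", bad_name],
                                capture_output=True, text=True)
        print(f"name={bad_name[:20]!r:25} exit={result.returncode}")

The point isn’t the script itself; it’s that once an attack is named and structured, automating a crude version of it is often a ten-minute job.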
In addition to talking about the techniques, the book comes with a CD that includes some tools (Canned Heat and Holodeck). Many of the examples in the book (especially the system interface attacks) use these tools. However, the ideas behind the attacks can be transferred to other tools. Also, while many of the examples are for MS Windows software and the tools are Windows-based, don’t focus too much on the actual tools; use the attacks as models and ideas to test with.
The biggest value of these attacks is that they give testers a fairly simple framework to use when looking at any program or application. Programs accept input, display output, interact with the file system, and access system resources and peripherals. Knowing just these details and nothing else about the program gives the tester plenty of areas to explore.
Bottom Line:
This is a great introduction to some practical testing techniques, presented in a way that is accessible to both beginners and veterans. New testers will appreciate the quick “get into it and get effective quickly” aspect of the book. Seasoned testers will appreciate the methodology and may pick up a few tricks they hadn’t considered.
What needs to be made clear is that this is not a book that will give the reader all they need to go forth and conquer. Frankly, no one book will do that in any discipline. Also, while Whittaker’s attacks are a great model, they will not cover 100% of your testing, so relying on them too heavily and neglecting other aspects of testing will leave the tester with “blind spots” that still need to be overcome. What Whittaker has done with How to Break Software is give the reader some food for thought, and a push to consider how the attacks described can broaden the way they think about testing.
Tuesday, May 11, 2010
The Devil is in the Details
Last week, I told you all about why I refer to this blog as “The Mis-Education and Re-Education of a Software Tester”. While it’s a real reflection of my frustration and my vow to do something about it, the blog would be somewhat less than useful if I didn’t tell you specifically what I am doing about it. Last week, I started the first of what I hope will be many interactions with the Association for Software Testing (AST): I went through the first week of the “Black Box Software Testing: Foundations” course.
This is the course that many of you may have seen on the internet, with materials designed by Cem Kaner and James Bach, among others. What’s really cool about what AST does is that, every few weeks, they actually assemble a group of people to teach this class and have students enroll in it. From there, it’s a full university-level course on software testing, with an emphasis on black box testing. Cem provides the lecture materials and the recorded lectures, and he also comments on the course itself. This is a little geeky, and sure, I'll own that, but actually participating in a course with the guy that many call the “Godfather of Testing”... yeah, that’s kinda’ cool :)!!!
So what does this have to do with the title of my post? Well, it’s been an interesting week and a half, to say the least. While I was prepared to say “I know there’s a lot I don’t know”, I didn’t realize just how much that really was. One of the skills that a software tester needs is the ability to avoid “traps”. Often, we set up expectations and decide what we are going to do based on our interpretation of what the requirements are and what they are meant to do, trusting that, if we follow them, we will do good testing. What I discovered on the last couple of “quizzes” is that one of the most important skills testers can develop is that of being a very critical reader. How do I know this? Because I floundered spectacularly on two quizzes that I thought I had nailed. Why did I do so poorly? Because I missed key details in what I was reading.
Without giving too much away (and as a way of encouraging others to participate in this class if you get the chance; membership in AST is $85 a year and gives the opportunity to participate in these types of learning opportunities for free or at a greatly reduced price, depending on the class), the multiple choice tests are worded differently from what most people are used to. With most multiple choice questions, you can read them, eliminate obvious duds, guess at the rest and, chances are, you will do well on the quiz or exam. Not so here. The questions are worded in such a way that there are potentially multiple right or wrong answers, and you then have to choose from, in some cases, up to ten different choices, only one of which is correct, where many of the numbered items are combinations of the previous answer choices. This means that “guesswork” will not help you here. You have to really go through each question and tease out exactly what it says and what it might imply, and then look at every single answer and determine if it’s just one of the choices, or a combination of the choices. I shall confess that this threw me for a loop.
To emphasize this even further, one of the questions had us discussing a function where a value was stored in a program, converted to a numeric type, and then manipulated by a second action. We were asked to give a detailed breakdown of what we would do to test the scenario, which I did. It was only after I read the other class members’ answers that I realized “uh oh, I may have read the requirements wrong”. Sure enough, after everyone had a chance to answer, the instructors offered clarification that verified that, yep, I missed something, and that something could have led my testing in a totally different direction.
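I can’t share the actual question, but here is a purely hypothetical sketch of the kind of ambiguity involved. Suppose the requirement says “the stored value is converted to a numeric type, then a second action doubles it”. Two testers can read “numeric type” differently and end up testing two different programs:

```python
# Hypothetical illustration only -- this is NOT the quiz question, just the
# same species of ambiguity. Requirement: "the stored value is converted to
# a numeric type, then a second action doubles it."

def reading_one(stored: str) -> int:
    # Reading 1: "numeric type" means integer, so fractions get truncated.
    return int(float(stored)) * 2

def reading_two(stored: str) -> float:
    # Reading 2: "numeric type" means float, so fractions are preserved.
    return float(stored) * 2

for value in ("4", "4.5"):
    print(value, "->", reading_one(value), "vs", reading_two(value))
# "4.5" yields 8 under the first reading and 9.0 under the second: one
# requirement, two defensible interpretations, two different expected results.
```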
The instructors then made clear that this exercise, and the resulting confusion, was by design. We as testers often fall into this trap (and I was not the only one that fell into it). We read a requirement, we think we understand fully what it is saying, and then we go off and test it. Upon further review, we realize what we were testing doesn’t seem to be doing what we think it should be doing. After much hand-wringing and possible confusion, we go and talk to the developers about what we are seeing, and what we have done, only to realize that, whoops, we misread the requirements, or the requirements were vague, so we made a decision based on what we perceived the need to be, rather than what the need actually is.
This means that the onus is on us testers to make sure that we have enough clarity about what we are doing early enough in the process, so that we don’t start off down paths that will be dead ends, or worse, long roads that take us far away from our goals. To be fair, even on these detours we can find information that is vital and relevant to the quality of a product, but really, it would be much better to find out first what we need to focus on, so that we know, for sure, what the stakeholders actually want us doing, and what they expect the product to do.
After that, hey, road trips are great :).
Friday, May 7, 2010
The Mis-Education and Re-Education of a Software Tester
Some of you may have noticed by now that today's title is the sub-text for TESTHEAD. I had a conversation recently about that very sub-text. One of the comments was that “[…] the name of your blog ‘The Mis-Education and Re-Education of a Software Tester’ sounds rather negative”. I hadn’t considered that this might be the interpretation, so today I’m going to explain exactly what I mean by that sub-text.
For starters, it helps to understand what got me into testing in the first place. I didn’t go to school to learn how to program computers or to test software. As a matter of fact, I went to college my first time around as a way to do something while I waited for my “real career” to take off. From 1985 until 1992, I spent a lot of my time, talent and energy trying to make a go as a musician in the San Francisco music scene. That was my passion, my reason for living at that point in my life. It was a lot of fun, and I had a great time doing it, but it never really made me a living.
In 1991, I had an opportunity to work with a temporary agency, and my first job sent me to Cisco Systems down in Menlo Park (back when they had 300 employees). While I was there I worked with the Engineering and Manufacturing groups, and that temp job turned into a full time job where I administered the Engineering test labs. Because of my time in the test labs and setting up machines and keeping a large lab environment up and running, I came to the attention of the test group, who thought I might make a good tester. So there it is, my meandering route from musician to tester.
Once I was in the testing group, I learned what I could from other testers and developers. There was no formal learning process or training program for testers. I took some classes on UNIX shell scripting and on how to program in C and C++, but I never took any classes about how to test or what worked best in a test environment. Along with the rest of the test team, I was given a copy of John Ousterhout’s book “Tcl and the Tk Toolkit” and told “learn this, as our automation tests are all based on this”. During my entire time at Cisco, I owned exactly two books on testing: “Testing Computer Software” by Kaner, Falk and Nguyen, and later on, “Software Test Automation” by Mark Fewster and Dorothy Graham.
Over the years, I have mostly functioned in what I call “firefight mode”. This has been an approach of “learn what I need to so I can get done what I have to get done”. Some might call it a “street smart” method of learning, where experience teaches what to keep and what to discard. The problem with this approach, in hindsight, is that, while it’s a great practical way of dealing with issues, and can be lean and mean some of the time, it doesn’t afford a very critical view of what the process is or how it develops over time. Additionally, to borrow a bit from the “street smarts” attitude, the vernacular rarely rises above the level of the street. This problem becomes even more of an issue when the tester in question is more of a lone resource or is focused in a specific area, which has been my reality for much of my career. In these cases, I often used whatever mechanism was in place for creating documentation, writing issue reports, and testing systems. To borrow from Stephen Covey, I was so focused on making sure that we were climbing the ladder, I rarely had the time or inclination to ask myself “is the ladder leaning up against the correct wall?”
It’s only been in the last few years that I've decided to take a broader view, and to really determine if that is indeed the case. To this end, I made a commitment that it was time to learn what I really knew, and discover what I really didn’t. This meant doing some vocabulary checks, and expanding my vocabulary. This meant reaching outside of my little cocoon and finding out what other testers were doing. It meant opening myself up to the fact that I might find that I was sorely lacking in key areas (something I always knew deep down, but it was less threatening when it was a fuzzy and amorphous lacking). It meant facing up to the fact that there were aspects of testing I just didn’t know anything about, and realizing that these aspects were things I just might not be any good at, at least not at that time.
This is the re-education part that I have been talking about, a way to get a better handle on areas that I knew about but couldn’t explain or quantify well, and delve into areas that I knew nothing or very little about. To that end, I made a commitment to read every testing book I could find. Some are great, some are lacking, and some take different views of concepts and are passionately argued by proponents and opponents. Taking the time to sift through these areas and apply them ourselves is critical.
We only learn what we actually apply, and then what we can effectively communicate to others. I decided that I didn’t want to continue on with half-truths and lore that got handed down and was only partially applied out of necessity. That doesn’t mean my goal is to become some uber-academic that masters every nuance of “test speak”. Quite the opposite: I want to see what I can do to better understand the discipline that is testing, determine how to carry those ideas forward, and better apply the techniques. My ultimate goal is to become better at testing and to learn better ways of applying those skills.
Wednesday, May 5, 2010
Wednesday Book Review: Secrets of A Buccaneer Scholar
Through the years, I have come across a number of books that I have used and valued. These may be new books, or they may be older ones. Each Wednesday, I will review a book that I personally feel would be worthwhile to testers.
Since I’ve referenced this book a bunch of times in various posts, I figure it’s only fitting that I include it as an entry in the Wednesday book review list. Again, this is not a testing book per se, but it is written by one of the most visible testers in the field (James Bach), and while the point of the book is unconventional education and developing a passion for lifelong learning, the fact that his career is that of a software tester makes much of what Bach says very relevant to any tester looking to expand their knowledge and understanding beyond the standard academic realms.
First, a disclaimer: some people love James Bach, some people cannot stand James Bach, and many people fall somewhere in between. I happen to be a fan of his blunt, in-your-face approach, and find the way that he writes to be both refreshing and unnerving (I happen to like Larry Winget for the exact same reasons). James does not filter. He says what he thinks and lets the chips fall where they may. He is especially blunt about his criticism of the current school system and the reasons why he dropped out of high school. If this is the message you take from the book (i.e. drop out of school, it worked for me), you will have greatly missed the point.
Secrets of a Buccaneer Scholar tells James’ story of disillusionment with the school system and how it set him on the path to walking away from it. Much of the book shows that James’ concepts of buccaneering developed over time; the loosely associated bands of brigands who were the precursors to the pirates of fiction formed, in many ways, a societal model with a lot of parallels to the way we live and learn today. I found this idea fascinating. Here are some of the ideas that form the core of the Buccaneer Scholar ideals (for all of them, I recommend picking up the book; these are the ones that resonated with me):
View your education as lifelong; learn to educate yourself by scouting and using all of the resources available (books, web, podcasts, friends and colleagues).
Look for and work on "authentic problems". You will be much more likely to be engaged in trying to solve problems that matter to you rather than those that fill a textbook but have no bearing on your own life or interests.
Find those things that interest you, that you find fun and enjoyable, and work with the way that you think. I wrote about this in one of my previous blog entries called “Training the Tiger to Test”.
Be willing to experiment, and try things that may or may not pan out, and be open to the notion that what you follow may lead to a dead end (but you still learn from that experience).
Explore a variety of methods of learning; don’t force yourself to use just one (I will frequently read four or five books simultaneously on the same topic, just to see which one engages me better, and usually I find that different sections from different books work for me at different times).
James likes using heuristics, and acronyms for describing those heuristics. The idea is that, by building models and frameworks to try out ideas, you can use a disciplined approach to solving problems or addressing areas with completeness and focus.
In many ways, a passion for one thing will give you a framework for how to apply what you have learned to something else (as I have frequently said, Snowboarding and Scouting are common metaphors that I use in my day to day testing. My understanding and appreciation of the two disciplines have helped me frame situations and issues elsewhere).
Learning without doing something with it is oftentimes pointless. Yes, it can be fun and a nice diversion, and shouldn’t be discounted entirely, but we mostly learn by doing, so roll up the sleeves and experiment, even if the experiment proves disastrous.
Challenge the status quo. Be willing to look at things differently, and ask why something that is being taught is being taught. Question everything. Don’t let someone or some dogma stop you from trying to understand what is really happening.
Your reputation is what will drive your career more than any diploma or certification you hold (personally, I believe this to be true; the diploma will open some doors, but is not a guarantee of success. For many, after their first job, their diploma isn’t as valuable. Note, I have one, so I don’t see 100% eye to eye on this, but I know many engineers who do not have diplomas or degrees who have likewise done very well; their reputations are what carried them through).
Being part of a community that gives to others, and sharing much of what you have learned, will help develop the network and opportunities to build your personal brand, and give you a chance to grow in your area of expertise by helping others do the same (I wonder if Bach reads Seth Godin, as this idea is almost a direct parallel with what Godin writes about in Linchpin).
Bottom Line:
James Bach has forged an incredible life, one that many of us would love to emulate. His life story is fascinating, and the challenges he has faced have brought him to where he is today and the philosophy of learning he embraces. Will this fit for everyone? No, it requires a tremendous amount of drive and self-discipline and work to do what Bach is advocating (and he freely admits the same). Should kids in school today use Secrets as an excuse to chuck school and follow his lead? Again, no, I wouldn’t make such a blanket statement. There is much value to a formal education and for many they can do very well in that environment, perhaps better than venturing out entirely on their own. What Bach advocates, the art of the Buccaneer Scholar, is that we all must take hold of our own educations, and work to do our best to make that approach a part of our everyday lives. Would I give Secrets to a kid in junior high or high school who is struggling with school? I might, but I would have to do so with heavy caveats (for some it may be the perfect advice, for others, it could prove disastrous). Would I recommend Secrets for those of us who want to remain lifelong learners and want to kick start our approach to getting in the groove to learn again? Absolutely.
Tuesday, May 4, 2010
Getting “Gazelle Intense” On a Goal
This is a different type of article, but one I’ve been thinking about lately. As I have been looking at professional goals, I have noticed that, when there are many things that need to be done, it’s difficult to get traction on every goal at the same time. The fact is that there are only so many hours in a day, and only so many ways you can carve out your time.
One of the things that has given me a different idea as to how to tackle some of my goals is to borrow something from the Personal Finance side of my life. Those who know me in my personal walk know that I am somewhat psychotically anti-debt; I don’t borrow money for any reason any longer, and have a goal to never borrow money again. This came from years in my past where I did really dumb things with money and racked up large debts, and later on ate through a fairly substantial nest egg because we were living larger than our means and we were not paying attention. Since 2007, however, we have lived entirely debt free in all areas (no consumer debt, no student loan debt, no mortgage debt, no business debt). One of the people that inspired me to make this stand and get to this point is Dave Ramsey (long time friends who read this blog are no doubt laughing right now, thinking to themselves “how many posts did it take for Michael to mention Dave Ramsey?!”).
In a nutshell, Dave Ramsey advocates getting together your debts, smallest to largest, paying the minimum on all but the smallest debt, and attacking that smallest debt with an absolute vengeance, throwing every penny you have to spare at that debt until it’s gone, and then attacking the next debt with the same fervor and vengeance. The idea is that the person paying the debt, gets larger and larger sums of money as each debt gets paid off to attack the next debt (Dave Ramsey calls this a "Debt Snowball"). In time, the entire debt is paid off, usually way faster than the person intended to, because they became “Gazelle Intense” with regard to getting out of debt. One of Dave’s most oft used metaphors is the idea of a Cheetah trying to bring down a Gazelle out on the Serengeti plain. The Cheetah has speed in a straight line, but a gazelle can duck and weave at high speed, too. If a Gazelle detects it will become dinner, you better believe it will run with every fiber of its being!
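For the programmers in the audience, here’s a toy sketch of the snowball idea, just to show how mechanical it really is. The figures are made up, interest is ignored, and none of this code comes from Ramsey; it only illustrates the “smallest first, roll the freed-up payments forward” pattern:

```python
# Toy illustration of the Debt Snowball idea (made-up numbers, interest
# ignored, overpayments not redistributed within a month).

def months_to_debt_free(debts, extra_per_month):
    """debts: list of (name, balance, minimum_payment) tuples."""
    debts = sorted(debts, key=lambda d: d[1])   # smallest balance first
    balances = {name: balance for name, balance, _ in debts}
    months = 0
    while any(b > 0 for b in balances.values()):
        months += 1
        spare = extra_per_month
        for name, _, minimum in debts:
            if balances[name] <= 0:
                spare += minimum                # a paid-off debt's minimum rolls forward
                continue
            payment = minimum + spare           # the smallest open debt gets all the extra
            spare = 0
            balances[name] = max(0, balances[name] - payment)
    return months

debts = [("credit card", 500, 25), ("car", 4000, 150), ("student loan", 9000, 200)]
print(months_to_debt_free(debts, extra_per_month=300))
```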
As I’ve been looking at this idea, it struck me that, with individual goals, it is also possible to become Gazelle Intense, but the challenge with a goal is that, unless it is truly pressing and urgent, most people will not go to the extremes that they might go to if the roof was about to cave in on them or, to continue with the metaphor, a cheetah was bearing down on them. So how can a person use the idea of getting “Gazelle Intense” with a goal that’s not financial? My first recommendation would be to “get current” on whatever is necessary, and then list the technical goals that you would like to accomplish and spell them out in tasks arranged from smallest to largest. Some tasks will be relatively easy (install and set up the environment for a new testing tool). Some goals are more difficult (become comfortable and familiar with C# Script), and some are even more challenging (design a framework that will allow me to fully test a web application through a protocol-driven API). Trying to do all of those things at the same time might become frustrating and overwhelming, but by cutting up each item into steps, with the smallest step first and the bigger steps following afterwards, it is possible to get traction on items and really get some forward movement. Intensity is the key in Dave Ramsey’s plan, and intensity is the key here. Another phrase that Ramsey uses is “live like no one else so that you can live like no one else”. Fans of Ramsey know what that means; it’s a call for “beans and rice, go for the jugular, live as inexpensively as possible, sell anything that is not essential, and get so good at it that the dog’s new name is eBay and the kids think they might be next”. I’m kidding about the dog and the kids, of course, but the rest is totally serious. Go after the goal with every fiber of your being, but do so by attacking the smallest elements of the goal first, and then ratchet up the intensity with each step toward the total goal. How to do that is totally up to you, and whatever the goal is, only you will be able to determine what the smallest and largest steps may be.
We may not have the luxury of shutting everything off so that we can focus on a particular goal, but if we take the idea of “Gazelle Intensity” to at least one of them, I promise that we will all be able to accomplish way more than we think we can, and we will do it way faster than we think we can. So to my fellow testers, I recommend taking this idea from Ramsey: pick a goal, shape it out, and then go after it as though a cheetah was on your tail.
Monday, May 3, 2010
Listening to Learn
Many of us have periods of time where our focus does not have to be 100% on the task we are doing. We spend time in commutes (driving, biking, walking or riding public transportation), taking breaks or lunches, working out in the gym, doing work in the yard, or just sitting down looking to get a few minutes of relative quiet. These are all moments where a bit of learning can be done, where people can “tune in, turn on, and learn up”, to bastardize the Timothy Leary quote (and the first tester that asks me “who is Timothy Leary?” deserves a slap (LOL!)).
The personal portable media player, and with it the development of the podcast, has changed the way that people can take the opportunity to learn and, dare I say it, even be entertained in the process. I will be the first to confess that I am not an avid user of the Apple iPod (my kids, on the other hand, love them, so we certainly have our share of them in our house); I have preferred devices that act more like a portable hard drive. For many years, my player of choice was an iRiver T30 that I hung around my neck every day for more than four years. It finally blew up in a blaze of glory earlier this year. Since then, I have replaced it with a 16GB Creative Zen (the two selling points were its ability to play video and its SD expansion slot, which lets me easily add media to listen to and view even if the device is full).
No, my point of this article is not to extol the virtues of one player or another, but to let testers out there know that, hey, there are *lots* of podcasts associated with Software Quality and Software Testing. Some of these are very professionally done, and some have a “guerilla desktop” feel to them. Don’t judge a book by its cover, many people say, and at the same time, don’t judge a podcast by its production values. Some slick podcasts are bereft of useful information, and some lo-fi podcasts are huge in the amount of information they share and what the listener can learn from them.
So what are some of my favorites? Here’s a brief list:
Randy Rice’s Software Quality Podcast: This is the first of the podcasts I discovered, and I’ve found myself returning to them time and time again. Randy recorded 18 podcasts between 2006 and 2009, and they vary from high quality interview shows to transcripts of call-in chat sessions. The quality of the shows’ production varies, but the information provided is fantastic. Randy is one of my favorite podcasters, in the sense that, in addition to being a tester who understands how to communicate the challenges of testing, he also has a style of delivery that is engaging and fun to listen to. Randy, it’s been awhile, would you consider doing more podcasts, please :)?
Rex Black Consulting Services Podcast: Rex does a monthly call on various testing topics, and wow, does he go in depth on whatever topic he covers. The production is the same on every one of his podcasts, which is to say it’s raw, live and not very produced (no background music, no production breaks, etc.) but the information you get is fantastic and well worth the 60-90 minutes each episode represents. Rex is actively posting podcasts of his presentations usually one month after he initially makes them.
Scott Hanselman’s Hanselminutes: This is one of the most active of the technical podcasts that I listen to, and it oftentimes focuses on testing topics. Scott is a .NET developer, and most of the time his topics cover development subjects like .NET languages and ASP.NET, along with other interesting and off-beat topics (such as talking about the craft of podcasting with Joel Spolsky or the science of fitness with John Lam). There are currently 212 different podcasts archived on the Hanselminutes site, so odds are you will be able to find a lot of things of interest to listen to.
43 Folders: This is actually not a testing podcast, but it’s one that I love listening to and I find it incredibly motivating. Merlin Mann hosts, and is, quite frankly, one of the most interesting and entertaining podcasters out there. This is the podcast associated with 43folders.com, which is a productivity site that was developed in association with David Allen’s book “Getting Things Done” (and David Allen has his own podcast, but I personally find Merlin way more entertaining to listen to, no disrespect to David whatsoever :) ). Merlin has a number of talks that he has recorded that help people get focused on what they really need to do, and give the shot in the arm motivation that someone like me needs from time to time, and he does it with an engaging and entertaining style. Three of his podcasts that I can highly recommend are his “Inbox Zero” Google tech talk, “149 Surprising Ways to Turbocharge Your Blog With Credibility” (this title is slightly misleading, but it’s a great talk nonetheless), and “Time and Attention” (a talk Merlin gave at Rutgers University).
Software Testing Podcast: This isn’t a podcast unto itself; it’s actually an aggregator of testing podcasts. Randy Rice’s podcasts are included in this list, as are podcasts from a number of other sites. I mention this primarily because I found some gems in this listing that I might never have come across otherwise. One of my favorite discoveries has to be Georgia Motoc’s Software Quality Podcast. I like her site because it focuses on some of the unique challenges regarding bilingual testing in Canada (why would that interest me? Because I have had both entertaining and frustrating issues with software that originated in Japan being localized for the U.S. market, so hearing her perspectives gives me insights should I need to face something like that again). The Gray Matters and Sticky Minds Sound Byte podcasts, with their emphasis on testing, are also listed here, and the Watir podcast in its entirety is available here as well. James and Jonathan Bach did a couple of quick podcasts a while back, discussing how they look at questions and other issues related to testing. They are here too.
Now, a quick note about these podcasts… For those hoping to hit the gold rush and find podcasts that will fill in the blanks and make you “Super Tester”… well, that’s probably not going to happen. There isn’t a simple “download and listen your way to testing prowess in 10 easy steps” (and if that can actually be done somehow, I may take a crack at it myself :) ). What you will get are some wonderful perspectives, some great advice, and some tips and tricks to help you reconsider what you are doing, do some things better, and get exposure to some new or different ideas. Not every one of them will prove to be of interest to everyone, but even if one topic spurs an interest and a desire to learn more or follow different avenues, then it will be time well spent.