You've probably all been through this before. A target date has been set. You've run a product through its paces, tested thoroughly, and had multiple members of your team look things over for sanity's sake. All is good. You know it. You feel it. Let's push this and call it a day.
The next morning, you walk in and think to yourself, "Let's just do a couple of little tweaks. That package we need to update? Seems like now would be a good time to do that." Update completed. Now that that's out of the way, I'll show our PM what's been deployed. Only now, instead of that clean, seamless environment we'd seen, verified, and tested within an inch of our lives, it's showing weird behavior and oddly placed elements, and everyone in the room is having that classic "WTF?" reaction.
This is truly one of the worst experiences to have, and all of us testers have had to deal with it at one time or another. So what causes this? While discussing it with our intern, I described it as the idea of "two screens". For this to make sense, think of the classic shoji screens used in Japan to separate rooms, made with wooden frames and rice paper as the wall material. In one screen there are a number of holes. In the second screen there are also holes, but placed in a different pattern. Lay the screens on top of each other, and for the most part you will not be able to tell that there is anything but a "solid wall" in front of you. Remove either of the two layers, though, and the holes become visible, and obviously so.
After a bit of poking around, we realized that, yep, updating a particular module our site uses changed how some of our code was being called. The issue had always been there, but it had been masked by the behavior of the previous revision of the third-party module. The update "fixed" something, and that fix exposed a flaw that had existed for quite some time; only now, with the update in place, could we see it.
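To make the "two screens" idea concrete, here is a minimal sketch (all function names are hypothetical, not from our actual site) of how a dependency update can unmask a latent bug in calling code:

```python
def old_get_items():
    # Older release of the third-party module happened to return its
    # results already sorted -- an implementation detail, not a promise.
    return [1, 2, 3, 5, 8]

def new_get_items():
    # The updated release returns the same data, but in insertion order.
    return [5, 1, 8, 2, 3]

def largest_item(items):
    # Our code silently assumed sorted input. The bug was always here;
    # the old module's behavior was the second screen masking the hole.
    return items[-1]

print(largest_item(old_get_items()))  # 8 -- looks correct
print(largest_item(new_get_items()))  # 3 -- the latent bug is now visible

def largest_item_fixed(items):
    # The real fix: stop relying on an undocumented ordering.
    return max(items)

print(largest_item_fixed(new_get_items()))  # 8
```

Note that nothing in the "third-party" code is wrong in either version; the defect lives entirely in the caller, and only the overlap of the two behaviors kept it invisible.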
I mention this because, in the rapidly changing and intertwined web world we live in today (never mind mobile, which adds its own interesting "screening effects"), the interaction of components, even at the most trivial levels, can open up a vast area of unexplored behavior. Is it practical to review every change when third-party components are updated? Probably not. But if the areas you are currently testing make use of those components, even peripherally, make a point of reading about and understanding the updates being performed, even when they seem relatively trivial.
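One lightweight way to make such updates visible rather than silent is to pin dependency versions so that an upgrade becomes an explicit, reviewable change. A sketch for a hypothetical Python stack (the package name `somelib` is made up; the post doesn't name its actual modules):

```shell
# requirements.txt -- exact pins; an upgrade now shows up in code review
#   somelib==2.4.1    # hypothetical third-party module

# See which pinned packages have newer releases available
pip list --outdated

# Upgrade deliberately, after reading the release notes, to a known version
pip install --upgrade 'somelib==2.5.0'
```

The mechanics differ per ecosystem (lockfiles in npm, Gemfile.lock in Ruby, and so on), but the principle is the same: you want the moment a screen is removed to be a moment you chose and can test around.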
Much of the time, the component changes will have no effect at all on what you are developing and testing. Sometimes, though, a change removes a mask, making it possible to see things you couldn't see before. Much better to make those discoveries while testing than in front of your PM when you want something signed off. In short, pay attention to the updates happening around you. You might find that those solid walls are not so solid after all ;).
1 comment:
You are very right that we testers, earlier or later in our careers, all face this situation once or twice.
I went through one such situation myself. I was testing a module that processes data from an Excel file. I had tested all of its functional aspects, including runs with about 20 rows of data. Once we deployed the feature for the beta release, we found it breaking just after the 10th row, with real data of something like 500 rows, even though the same feature had passed earlier with 20 rows.
Finally we found the root cause. It was nothing big, but it was enough for a bad day at the office.
So I think we all have such stories in our careers as testers.