Ah yes, another date with the legendary Señor Performo :).
Leandro is always fun to hear present, and I particularly liked the premise of his talk, as I frequently find myself dealing with cognitive biases, both in spotting them when others use them and in admonishing myself when I do (and yes, I do fall prey to them from time to time).
I've been teaching a class for the past few months on software test automation, specifically how to use a tool like Playwright within an automated testing framework. To that end, we have a capstone project that runs for three weeks. As anyone involved in software development knows, three weeks is both a lot of time and no time at all. That's by design: there is no way to do everything that's needed, so the areas we choose to focus on force us into decisions that will not be optimal. This fits into the conversation Leandro is having today: how do you improve and get better when you have so many pressures and so little time to do it all?
Note: I am not trying to throw shade at my students. I think they are doing a great job, especially in the limited time frame they have (again, by design). I'm essentially a "disinterested stakeholder" in this project, meaning I care about the end product, but I'm trying my level best not to get involved or direct them as to what to do. In part, it's not the instructor's role to do that, but I'm also curious to see the what and the why behind the choices they make.
We often act irrationally under pressure and with time limitations. Often we are willing to settle for what works versus what is most important or helpful. I'm certainly guilty of that from time to time. An interesting aspect of this, and one I have seen, is the "man with a hammer" syndrome: once we have something we feel works well, we start duplicating it and building on it because we know we can get great wins with it. That's all well and good, but at times we can go overboard. Imagine you have an application with navigation components. You may find that many of those components use similar elements, so you can create a solution that covers most of your navigation challenges. The good thing? We have comprehensive navigation coverage. The disadvantage? All of that work on navigation, while important and necessary, has limited the work on other functionality in the unit under test. Thus, it may be a better use of time to cover some of the navigation aspects and get some coverage on other parts of the application, rather than have a comprehensive testing solution that covers every navigation parameter and little else to show for it.
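To make that concrete, here's a rough sketch (the URL, link names, and selectors are placeholders I made up, not anything from the talk or the class) of the kind of reusable Playwright navigation helper that is easy to keep polishing at the expense of everything else:

```typescript
// Hypothetical example: a generic navigation helper that is tempting to extend forever.
import { test, expect, type Page } from '@playwright/test';

// One helper covers every nav link, so it's easy to keep adding rows to the table below
// and call that "coverage" instead of testing the features behind those links.
async function navigateTo(page: Page, linkName: string, expectedPath: string) {
  await page.getByRole('navigation').getByRole('link', { name: linkName }).click();
  await expect(page).toHaveURL(new RegExp(expectedPath));
}

test('main navigation links resolve', async ({ page }) => {
  await page.goto('https://example.com'); // placeholder URL
  for (const [name, path] of [['Products', '/products'], ['About', '/about']] as const) {
    await navigateTo(page, name, path);
  }
});
```

Nothing wrong with a helper like this; the trap is spending the whole three weeks adding entries to it.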
Another example Leandro gives is "Living Among Wolves", which we can consider an example of conformity bias: when we do certain things or are part of a particular environment, we take on the thinking of the people around us to fit in with the group. Sometimes this is explicit, sometimes it is implicit, and sometimes we are as surprised as anyone else to find we are doing something we are not even aware of.
The "sunk cost" appears in a lot of places. Often we will be so enamored with the fact that we have something working that we will keep running and working with that example as long as we can. We've already invested in it. We've put time into it, so it must be important. Is it? Or are we giving it an outsized importance merely because we've invested a lot of time into it?
One of the lessons I learned some time back is that any test that never fails is probably not very well designed, or offers little value in the long run. It's often a good idea to approach tests from both a positive and a negative perspective. It's one thing to get lucky and have something work as a positive/happy path test (or not necessarily lucky, but limited in what's being done). Does your logic hold up when you invert the testing idea? Meaning, can you create a negative test, or multiple negative tests, that will "fail" based on changing the data or feeding in bogus data? Better yet, are you doing effective error handling with that bogus data? The point is that so many of our tests are weighted toward happy path, limited-depth tests. If you have a lot of positive tests and not many tests that handle negative aspects (where the incorrect outcome is expected, and therefore makes a test "pass" instead of fail), can you really say you have tested the environment effectively?
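As a rough illustration (the form fields, URL, and error message here are my own placeholders), pairing a happy-path check with a negative test in Playwright might look something like this:

```typescript
// Hypothetical example: a negative test that feeds bogus data and expects the app to refuse it.
import { test, expect } from '@playwright/test';

test('signup rejects an invalid email address', async ({ page }) => {
  await page.goto('https://example.com/signup'); // placeholder URL
  await page.getByLabel('Email').fill('not-an-email'); // deliberately bogus data
  await page.getByRole('button', { name: 'Sign up' }).click();
  // The "pass" condition is that the application rejects the input and explains why.
  await expect(page.getByText('Please enter a valid email address')).toBeVisible();
});
```

If a test like this can never fail because the application happily accepts "not-an-email", that's exactly the kind of signal the happy-path-only suite would never give you.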
Ending with a shameless plug for Leandro: Leandro is now an author, having written "The Hitchhiker's Guide to Load Testing Projects", a fun walkthrough that will guide you through the phases or levels of an IT load testing project. https://amzn.to/37wqpyx