Tuesday, October 15, 2024

AI-Augmented Testing: How Generative AI and Prompt Engineering Turn Testers into Superheroes, Not Replace Them, with Jonathon Wright (a PNSQC Live Blog)

I'm sad that Jonathon couldn't be here this year, as I had a great time talking with him last year. Still, since he was presenting remotely, I could hear him speak on what is honestly the most fun title of the entire event (well played, Jonathon, well played ;) ).

It would certainly be neat if AI were able to enhance our testing prowess, helping us find bugs in the most unexpected places and create comprehensive test cases that cover every conceivable scenario (editor's note: you all know how I feel about test cases, but be that as it may, many places value and mandate them, so I don't begrudge this attitude at all).

Jonathon is calling for us to recognize and use "AI-augmented testing," where AI doesn't replace testers but instead amplifies their capabilities and creativity. Prompt engineering can elevate the role of testers from routine task-doers to strategic innovators. Rather than simply executing tests, testers become problem solvers, equipped with "AI companions" that help them work smarter, faster, and more creatively (I'm sorry, but I'm getting a "Chobits" flashback with that pronouncement. If you don't get that, no worries. If you do get that, you're welcome/I'm sorry ;) ).

The whole goal of AI-augmented testing is to elevate the role of testers. Testers are often tasked with running manual or automated tests, getting bogged down in repetitive tasks that demand "attention to detail" but do not allow much creativity or strategic thinking. The goal of AI is to "automate the routine stuff," allowing testers "to focus on more complex challenges" ("Stop me! Oh! Oh! Oh! Stop me... Stop me if you think that you've heard this one before!"). No disrespect to Jonathon whatsoever; it's just that this has been the promise for 30+ years (and no, I'm not going to start singing When In Rome to you, but if that earworm is in your head now... mwa ha ha ha ha ;) ).

AI-augmented testing is supposed to enable testers to become strategic partners within development teams, contributing not merely bug detection but actual problem-solving and quality improvement. With AI handling repetitive tasks, testers can shift their attention to more creative aspects of testing, such as designing unique test scenarios, exploring edge cases, and ensuring comprehensive coverage across diverse environments. This shift is meant to enhance the value that testers bring to the table and make their roles more dynamic and fulfilling. Again, this has been the promise for many years; maybe there's some headway here.

The point is that testers who want to harness the power of AI will need a roadmap for mastering AI-driven technologies. There are many of them out there, in a plethora of implementations ranging from general-purpose LLMs to dedicated testing tools. No tester will ever master them all, but even if you only have access to an LLM system like ChatGPT, there is a lot that can be done with prompt engineering and harnessing the output of these systems. They are, of course, not perfect, but they are getting better all the time. AI can process vast amounts of data, analyze patterns, and predict potential points of failure, but it still requires humans to interpret results, make informed decisions, and steer the testing process in the right direction. Testers who embrace AI-augmented testing will find themselves better equipped to tackle the challenges of modern software development. In short, AI will not take your job... but a tester who is well-versed in AI just might.

This brings us to prompt engineering: the practice of crafting precise, well-designed prompts that guide generative AI to perform specific testing tasks. Mastering prompt engineering allows testers to customize AI outputs to their exact needs, unlocking new dimensions of creativity in testing.

So what can we do with prompt engineering? We can use it to... (a quick sketch in code follows the list)

- instruct AI to generate test cases for edge conditions
- simulate rare user behaviors
- explore vulnerabilities in ways that would be difficult or time-consuming to code manually
- validate AI outputs to ensure that generated tests align with real-world needs and requirements
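To make that list a bit more concrete, here's a minimal sketch of what the first item might look like in code. To be clear, this is my own illustration, not something from Jonathon's talk: it assumes the OpenAI Python SDK and an API key, and the generate_edge_case_tests() helper and its prompt wording are hypothetical examples.

```python
# A minimal sketch of prompt-engineered test-case generation (my own
# illustration, not from the talk). Assumes the OpenAI Python SDK
# (`pip install openai`) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def generate_edge_case_tests(feature_description: str) -> str:
    """Ask an LLM for edge-case test scenarios for a described feature.

    The prompt wording here is a hypothetical example of a "precise,
    well-designed prompt"; adjust it to your own context.
    """
    prompt = (
        "You are an experienced software tester. For the feature below, "
        "list five edge-case test scenarios, each with a short title, "
        "preconditions, steps, and an expected result.\n\n"
        f"Feature: {feature_description}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model will do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # A human still reviews and validates whatever comes back;
    # that's the "augmented" part of AI-augmented testing.
    print(generate_edge_case_tests(
        "A login form that accepts an email address and a password"
    ))
```

The prompt carries the tester's intent (edge conditions, a consistent structure, expected results), and the tester's judgment does the rest, which is exactly the division of labor Jonathon is describing.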

Okay, so AI can act as a trusted companion, an ally helping testers do their jobs more effectively without replacing the uniquely human elements of critical thinking and problem-solving. Wright's presentation provides testers with actionable strategies to bring AI-augmented testing to life, from learning the nuances of prompt engineering to embracing the new role of testers as strategic thinkers within development teams. We can transform workflows so they are more productive, efficient, and engaging.

I'll be frank: this sounds rosy and optimistic, but wow, wouldn't it be nice? The cynic in me is a tad skeptical, but anyone who knows me knows I'm an optimistic cynic. Even if the reality turns out to be a factor of two less than what is promised here... that's still pretty rad :).
