Wednesday, October 16, 2024

What Are We Thinking — in the Age of AI? with Michael Bolton (a PNSQC Live Blog)

In November 2022, the release of ChatGPT brought the world of the Large Language Model (LLM) to prominence almost overnight. With its uncanny ability to generate human-like text, it quickly led to lofty promises and predictions. The capabilities of AI seemed limitless—at least according to the hype.

In May 2024, the release of GPT-4o further fueled both excitement and skepticism. Some hailed it as the next leap toward an AI-driven utopia. Others, particularly those in the research and software development communities, took a more skeptical view. The gap between magical claims and the real-world limitations of AI was becoming clearer.

In his keynote, "What Are We Thinking — in the Age of AI?", Michael Bolton challenges us to reflect on the role of AI in our work, our businesses, and society at large. He invites us to critically assess not just the technology itself, but the hype surrounding it and the beliefs we hold about it.

From the moment ChatGPT debuted, AI has been the subject of immense fascination and speculation. On one hand, we’ve heard the promises of AI revolutionizing software development, streamlining workflows, and automating complex processes. On the other hand, there have been dire warnings about AI posing an existential threat to jobs, particularly in fields like software testing and development.

Those of us in the testing community may feel weirdly called out. AI tools that can generate code, write test cases, or even perform automated testing tasks raise a fundamental question: Will AI replace testers?

Michael takes a nuanced view here. While AI is powerful, it is not infallible. Instead of replacing testers, AI presents an opportunity for testers to elevate their roles. AI may assist with certain tasks, but it cannot replace the critical thinking, problem-solving, and creativity that human testers bring to the table.

One of the most compelling points Bolton makes is that **testing isn’t just about tools and automation**—it’s about **mindset**. Those who fall prey to the hype of AI without thoroughly understanding its limitations risk being blindsided by its flaws. The early testing of models like GPT-3 and GPT-4o revealed significant issues, from **hallucinations** (where AI generates false information) to **biases** baked into the data the models were trained on.

Bolton highlights that while these problems were reported early on, they were often dismissed or ignored by the broader community in the rush to embrace AI’s potential. But as we’ve seen with the steady stream of problem reports that followed, these issues couldn’t be swept under the rug forever. The lesson? **Critical thinking and skepticism are essential in the age of AI**. Those who ask tough questions, test the claims, and remain grounded in reality will be far better equipped to navigate the future than those who blindly follow the hype.

We should consider our relationship with technology. As AI continues to advance, it’s easy to become seduced by the idea that technology can solve all of our problems. Michael instead encourages us to examine our beliefs about AI and technology in greater depth and breadth.

- Are we relying on AI to do work that should be done by humans?
- Are we putting too much trust in systems that are inherently flawed?
- Are we, in our rush to innovate, sacrificing quality and safety?

Critical thinking—and actually practicing it—is more relevant than ever. As we explore the possibilities AI offers, we must remain alert to the risks. This is not just about preventing bugs in software; it’s about safeguarding the future of technology and ensuring that we use AI in ways that are ethical, responsible, and aligned with human values.

Ultimately, testers have a vital role in this new world of AI-driven development. Testers are not just there to check that software functions as expected; this is our time to step up and be the clarions we claim we are. We are the guardians of quality, the ones who ask “What if?” and probe the system for hidden flaws. In the age of AI, we need to embrace that role more than ever.

Michael posits that AI may assist with repetitive tasks, but it cannot match the *intuition, curiosity, and insight* that human testers bring to the job.

It’s still unclear what the AI future will hold. Will we find ourselves in an AI-enhanced world of efficiency and innovation? Will our optimism give way to a more cautious approach? We don’t know, but one thing seems certain: those who practice critical thinking, explore risks, and test systems rigorously will have a genuine advantage.
