Tuesday, October 15, 2024

Vulnerabilities in Deep Learning Language Models (DLLMs) with John Cvetko (A PNSQC Live Blog)


There's no question that AI has become a huge topic in the tech sphere in the past few years. It's prevalent in the talks being presented at PNSQC (it's even part of my talk tomorrow ;) ). The excitement is contagious, no doubt, but there's a bigger question we should be asking (and John Cvetko is addressing)... what vulnerabilities are we going to be dealing with, specifically in Deep Learning Language Model platforms like ChatGPT?

TL;DR version: are there security risks? Yep! Specifically, we are looking at Generative Pre-trained Transformer (GPT) models. As these models evolve and expand their capabilities, they also widen the attack surface, creating new avenues for hackers and bad actors. It's one thing to know there are vulnerabilities, it's another to understand them and learn how to mitigate them.

Let's consider the overall life cycle of a DLLM. We start with the initial training phase, then move to deployment, and then monitor its ongoing use in production environments. DLLMs require vast amounts of data for training. What do we do when this data includes sensitive or proprietary information? If that data is compromised, organizations can suffer significant privacy and security breaches.


John makes the point that federated training is growing when it comes to the development of deep learning models. Federated training means multiple entities contribute data to train a single model. While this distributes learning and reduces the need for centralized data storage, it also introduces a new range of security challenges. In particular, federated training increases the risk of data poisoning, where malicious actors intentionally introduce harmful data into the training set to manipulate the model's generated content.

Federated training decentralizes the training process so that organizations can develop sophisticated AI models without sharing raw data. However, according to Cvetko, a decentralized approach also expands the attack surface. Distributed systems are, almost by design, more vulnerable to tampering. Without proper controls, DLLMs can be compromised before they even reach production.
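To make that concrete, here is a minimal sketch (mine, not John's) of the federated averaging idea in Python. The "model" is just a weight vector and each training step is a single gradient update, purely for illustration:

    import numpy as np

    def local_update(weights, data, labels, lr=0.1):
        # One local training step (a single least-squares gradient
        # step stands in for real model training).
        preds = data @ weights
        grad = data.T @ (preds - labels) / len(labels)
        return weights - lr * grad

    def federated_average(weights, participants):
        # Each participant trains locally; the coordinator only ever
        # sees the updated weights, never the raw data.
        updates = [local_update(weights, d, l) for d, l in participants]
        return np.mean(updates, axis=0)

    rng = np.random.default_rng(42)
    participants = [(rng.normal(size=(20, 5)), rng.normal(size=20))
                    for _ in range(3)]
    weights = np.zeros(5)
    for _ in range(10):
        weights = federated_average(weights, participants)

The poisoning risk John describes is visible right in federated_average(): the coordinator averages whatever it receives, so a single participant returning manipulated weights skews the shared model for everyone.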

There is always a danger of adversarial attacks during training. Bad actors could introduce skewed or intentionally biased data to alter the behavior of the model. This can lead to unpredictable or dangerous outcomes when the model is deployed. These types of attacks can be difficult to detect because they occur early in the model's life cycle, often before serious testing begins.

OK, so that's great... and unnerving. Bad actors can make a mess of these models. So what can we do about it?

Data Validation: Implement strict data validation processes to ensure that training data is clean, accurate, and free from malicious intent (there's a sketch of this idea after the list). By scrutinizing the data that enters the model, organizations can reduce the risk of data poisoning.

Model Auditing: Continuously monitor and audit models during both training and deployment. This helps detect oddities in model behavior early on, allowing for quicker fixes and updates.

Federated Learning Controls: Establish security controls around federated learning processes, such as encrypted communication between participants, strict access controls, and verification of data provenance.

Adversarial Testing: Conduct adversarial tests to identify how DLLMs respond to unexpected inputs or malicious data. These tests can help organizations understand the model’s weaknesses and prepare for potential exploitation.
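As promised above, here is a rough sketch of what the data validation idea might look like in practice. This is my own simplified illustration (a crude outlier screen over training-example embeddings), not anything from the talk; real poisoning defenses go much deeper:

    import numpy as np

    def filter_suspect_examples(embeddings, z_threshold=3.0):
        # Flag training examples whose embeddings sit unusually far
        # from the corpus centroid. Blunt, but it catches blatant
        # outliers that might indicate injected data.
        centroid = embeddings.mean(axis=0)
        dists = np.linalg.norm(embeddings - centroid, axis=1)
        z_scores = (dists - dists.mean()) / dists.std()
        return z_scores < z_threshold

    rng = np.random.default_rng(7)
    clean = rng.normal(0, 1, size=(500, 64))
    poisoned = rng.normal(8, 1, size=(5, 64))  # deliberately far away
    corpus = np.vstack([clean, poisoned])
    mask = filter_suspect_examples(corpus)
    print(f"kept {mask.sum()} of {len(corpus)} examples")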

There is a need today for "Responsible AI development." DLLMs are immensely powerful and carry significant risk if not properly secured. While this "new frontier" is fun and exciting, we have a bunch of new security challenges to deal with. AI innovation does not have to come at the expense of security. By understanding the life cycle of DLLMs and implementing the right countermeasures, we can leverage the power of AI while safeguarding our systems from evolving threats.

Mistakes I Made So You Don’t Have To: Lessons in Mentorship with Rachel Kibler (A PNSQC Live Blog)

I have known Rachel for several years, so it was quite fun to sit in on this session and hear about struggles I recognized all too well. I have tried training testers over the years; some I've been successful with, others not so much. When a new tester comes along quickly, seems to get it, and digs testing, that's the ultimate feeling (well, *an* ultimate feeling).

However, as Rachel points out, it’s also full of potential missteps, and as she said clearly at the beginning, "Believe me, I’ve made plenty!" This was a candid and honest reflection of what it takes to be a mentor and help others who are interested in becoming testers, as well as those who may not really want to become testers, but we mentor them anyway.

We can sum this whole session up really quickly with "Learning from our mistakes is what makes us better mentors—and better humans"... but what's the fun in that ;)?


Mistake 1: One-Size-Fits-All Training Doesn’t Work

There is no single, ideal method to teach testing that would work for everyone. Rachel had clear plans and expected to get consistent results. However, "people are not vending machines". You can’t just input the same words and expect identical outcomes. Each person learns differently, has different experiences, and responds to unique challenges.

Mistake 2: Setting the Wrong Challenges

It's possible to give team members tasks that are either too difficult or too easy, failing to gauge their current abilities. The result? Either they become overwhelmed and lose confidence, or they feel under-challenged and disengaged. Tailoring challenges to a trainee's current skill level not only builds their confidence but also keeps them engaged and motivated. As mentors, our role is to provide enough support to help them succeed while still pushing them to grow.


Mistake 3: Forgetting the Human Element

At the end of the day, we’re working with humans. Rachel’s talk highlights the importance of remembering that training isn’t just about passing on technical knowledge—it’s about building relationships.  Everyone has unique needs, emotions, and motivations. By focusing on the human element, we can create an environment where people feel supported and valued, making them more likely to succeed.

Mistake 4: Not Embracing Mistakes as Learning Opportunities

Mistakes are opportunities to learn. Mistakes aren’t failures—they’re stepping stones. Whether it’s a trainee misunderstanding a concept or a mentor misjudging a situation, these moments are chances to grow. They teach us humility, patience, and resilience.

Rachel’s talk is a reminder that no one is a perfect mentor right out of the gate. The process of becoming a great mentor is filled with trial and error, reflection, and growth. Also, Imposter Syndrome is very real and it can be a doozy to overcome.  Ultimately, the key takeaway is this: mentorship is a journey, not a destination. We will make mistakes along the way, but those mistakes will help shape us into more effective, empathetic, and responsive mentors.

Scaling Tech Work While Maintaining Quality: Why Community is the Key with Katherine Payson (a PNSQC Live Blog)

If someone had told me ten years ago I'd be an active member of the "gig economy," I would have thought they were crazy (and maybe looked at them quizzically because I wouldn't entirely understand what that actually meant). In 2024? Oh, I understand it, way more than I may have ever wanted to (LOL!). Rather than looking at this as a bad thing, I'm going to "Shift Out" (as Jon Bach suggested in the last talk) and consider some aspects of the gig economy that are helping to build and scale work and, dare we say it, quality initiatives.

Katherine Payson offers some interesting perspectives:

- The gig economy generates $204 billion globally
- Many companies are taking advantage of this, including international companies hiring all over the world for specific needs (I know, I did exactly this during 2024)
- In 2023, the growth rate for gig work was expected to be 17%
- By 2027 the United States is expected to have more gig workers than traditional full-time employees

This brings up an interesting question... with more people involved in gig work, and not necessarily tied to or beholden to a company for any meaningful reasons, how do these initiatives scale, and how do quality and integrity apply?

Strong Community is the approach that Katherine is using and experiencing over at Cobalt, a company that specializes in "pentesting-as-a-service". Cobalt has grown its pool of freelance tech workers to over 400 in three years. That's a lot of people in non-traditional employment roles. So what does that mean? How is trust maintained? How is quality maintained? Ultimately, as Katherine says, it comes down to effective "Community Building".

Today, many businesses are looking for specialized skills, frequently beyond what traditional full-time employment can provide. Yes, AI is part of this shift, but there is still a significant need for human expertise. As Cobalt points out, cybersecurity, software development, and other technical fields definitely still require human employees with a very human element to them. What this means is that there is a large rise in freelance professionals actively offering niche talents on a flexible, on-demand basis (likely also on an as-needed basis, both for the companies and the gig workers themselves). Again, the bigger question is "Why should a gig worker really care about what a company wants or needs?"

Community can be fostered directly when everyone is in the same town, working on the same street, going to the same office. When Cobalt first began scaling, they relied on a traditional trust model that worked well for a smaller, more centralized team. As the number of freelancers grew, however, this model began to show its limitations. Without a more robust system in place, it would be impossible to ensure consistent quality across a distributed workforce.


Tools can go a certain distance when it comes to managing quality and production integrity. More to the point, developing actual communities within organizations is another method for building quality initiatives that resonate with people at every level of involvement.

Cobalt prides itself on being a company that is able to maintain quality at scale. It claims to create a culture where freelancers feel connected, supported, and motivated to deliver their best work. So how does Cobalt do that?

Collaboration and Communication: Freelancers can work independently, but they don't work in isolation. Cobalt believes in open communication, where freelancers can collaborate with one another, share knowledge, and learn from each other’s experiences.

Mentorship and Professional Development: Cobalt invests in the professional growth of freelancers. Mentorship opportunities, training programs, and access to industry resources help their freelance community continuously hone their skills.

Recognition and Incentives: High-performing freelancers are recognized and rewarded for their contributions. This helps retain top talent and encourages others to aim for top-quality work.

Feedback Loop: Freelancers receive regular feedback on their work, helping them improve and keep quality high across the board.

As the gig economy continues to grow, maintaining quality at scale will become increasingly important everywhere. Cobalt aims to embrace the strengths of its freelance workforce, not just as individual contributors but as part of a larger community. Scaling with freelancers is not just about hiring more people—it's about building a culture of collaboration, growth, and trust. To ensure quality remains front and center, companies need to invest in their communities every bit as much as they do in their tools and processes.

Shifting Out: Beyond Left and Right in DevOps with Jon Bach (a PNSQC Live Blog)

If you have any involvement in testing or DevOps, you hear a bunch about Shift-Left and Shift-Right. The idea is that we bring testing earlier into the development of the product and that we continue to test after the product is released. But what if there's a third dimension, one that helps us make sure we are testing where it is most effective and needed, while making the most of our testing resources? Jon Bach wants us to consider "Shifting Out" along with the familiar options.

Think of Shifting Out less as a directional movement and more as an elevation movement ("rise up" or "levitate" might be better words, but they're not as memorable as Shift Out, so I get it :) ). If Shift-Left deals with working with or around a particular tree, Shift Out gives you a view of the entire forest. Shifting out is all about your perspective and, along with that, balancing testing resources so that we have placed our attention on the important areas (harder to see when staring at one tree, so to speak ;) ). By the same logic, we can also consider Shifting In after we have Shifted Out.

Okay, so I get the general idea of Shift Out. How do we do it?

Customer bug reports are an interesting example. Who owns the issue? Where should testing be applied? When? Would Shift-Left or Shift-Right be helpful here? Does it even fall within the Left or Right framework? As we shift out, we start to see that problems don’t always fit neatly into one category—they often require collaboration across multiple teams to resolve. 

Early bug detection is great but the fact is that we can’t always catch bugs early, no matter how much we Shift-Left. Incomplete requirements, unknown user behaviors, and system complexity introduce a lot of unknowns, meaning that some issues are not only unlikely to be found but may be impossible to find until the code goes live. Shifting out acknowledges this and encourages teams to plan for uncertainty rather than simply striving to eliminate it.

What signals are your tests sending? Meaning, are your tests highlighting the right problems? What can you learn from the data generated across development, test, and production environments? It's not enough to just run the tests. We need to analyze what the tests are telling us and look closely at the data we are getting. Additionally, it may make sense to reframe existing tests. This way, we get the information from our original tests but also gain additional information by pivoting to a different aspect or need. Same test, with different results, and possibly different conclusions.
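As a small, hypothetical illustration of reading those signals: if you keep pass/fail history per test across CI runs, a few lines of analysis can separate tests that fail consistently (probably a real bug) from tests that flip back and forth (noise that erodes trust in the suite). The test names and results here are invented for the example:

    from collections import defaultdict

    # (test_name, passed) tuples collected across CI runs -- invented data.
    results = [
        ("test_login", True), ("test_login", True), ("test_login", True),
        ("test_checkout", False), ("test_checkout", False), ("test_checkout", False),
        ("test_search", True), ("test_search", False), ("test_search", True),
    ]

    history = defaultdict(list)
    for name, passed in results:
        history[name].append(passed)

    for name, runs in history.items():
        fail_rate = runs.count(False) / len(runs)
        flips = sum(1 for a, b in zip(runs, runs[1:]) if a != b)
        if fail_rate == 1.0:
            verdict = "consistently failing: likely a real bug"
        elif flips > 0:
            verdict = "flaky: the signal itself needs attention"
        else:
            verdict = "stable"
        print(f"{name}: fail rate {fail_rate:.0%}, {flips} flips -> {verdict}")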

Shifting Out (and Shifting In) are all about zooming in and out to consider the broader context of our testing strategies. While Shift-Left and Shift-Right help focus on specific aspects of the software lifecycle, shifting out lets us step back and see how the pieces fit together.

Shift OUT is about:

Outlook: getting credible action items and information
Understanding: knowing about the issues and contexts where they fit and what/why it matters
Treatment: knowing where and when the appropriate solution needs to be applied, as well as why it is the most effective

Ultimately, focusing solely on Shift-Left or Shift-Right leads to a tunnel vision effect. Shifting-Out helps with developing perspectives for managing all of the possible quality processes. Catching bugs earlier and learning from production is important. So is understanding the bigger picture and making informed decisions that help balance both.

Exploring Secure Software Development w/ Dr. Joye Purser and Walter Angerer (a live blog from PNSQC)

Okay, so let's get to why I am here. My goal is to focus on areas that I might know less about and see actionable efforts in areas where I can be effective (and again, look for things I can use that don't require permission or money from my company to put into play).

Dr. Joye Purser is the Global Lead for Field Cybersecurity at Veritas Technologies. Walter Angerer is Senior Vice President for Engineering at Veritas and co-author of the paper. To be clear, Dr. Purser is the one delivering the talk.

Creating secure software involves a lot of moving parts. So says someone who labels herself as "at the forefront of global data protection."

High-profile security incidents are increasing, and secure software development is needed more than ever. With these cases ending up in the news regularly, Dr. Purser shared her journey and experiences at Veritas, a well-established data protection company, and how they go about ensuring software security.

Veritas has a seven-step SecDevOps process, demonstrating how they aim to control and secure software at every stage.

1. Design and Planning: Building security in from the outset, not bolting it on as an afterthought.

2. Threat Modeling: Identifying potential threats and mitigating them before they can become problems.

3. Code Analysis: Veritas uses advanced code analysis tools to identify vulnerabilities early in the process.

4. Automated Testing: Leveraging automation to continuously test for weaknesses.

5. Chaos Engineering: Veritas has a system called REDLab, which simulates failures and tests the system’s robustness under stress.

6. Continuous Monitoring: Ensuring that the software remains secure throughout its lifecycle.

7. Incident Response: Being prepared to respond quickly and effectively when issues do arise.


A little more on chaos engineering. This technique actively injects failures and disruptions into the system to see how it responds, with the idea that systems are only as strong as their weakest points under pressure. Veritas' REDLab is central to this effort, putting systems under tremendous stress with controlled chaos experiments. The result is a more resilient product that can withstand real-world failures.
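REDLab itself is internal to Veritas, so as a generic stand-in, here is a minimal sketch of the chaos idea: wrap a dependency call so that it randomly fails or stalls, then check that the caller degrades gracefully. The names, probabilities, and retry policy are mine, purely for illustration:

    import random
    import time

    def chaotic(failure_rate=0.2, max_delay=2.0):
        # Decorator that randomly injects exceptions and latency into
        # a call, simulating an unreliable dependency.
        def wrap(fn):
            def inner(*args, **kwargs):
                if random.random() < failure_rate:
                    raise ConnectionError("chaos: injected failure")
                time.sleep(random.uniform(0, max_delay))
                return fn(*args, **kwargs)
            return inner
        return wrap

    @chaotic(failure_rate=0.3, max_delay=0.5)
    def fetch_backup_status():
        return "OK"

    def resilient_status(retries=3):
        # The system under test should tolerate the chaos:
        # retry a few times, then degrade rather than crash.
        for _ in range(retries):
            try:
                return fetch_backup_status()
            except ConnectionError:
                continue
        return "DEGRADED"

    print(resilient_status())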

Veritas also focuses on ways to validate and verify that code generation is done securely, along with a variety of ways to stress test software during multiple stages of the build process. The talk also touched on the importance of keeping technical teams motivated. Examples of role-playing scenarios, movie stars, and innovative content add a touch of fun and can help keep development teams engaged.

As technologies evolve, so do the techniques required to keep software safe. Security is needed at every stage of the software development lifecycle. Using techniques like chaos engineering along with creative team engagement has helped Veritas stay at the front of secure software development.

What Do I Want to Do With My Next Job? I Want to TEST!!!

This was a draft that I never published. It's rare that I do that, to be honest. My drafts folder is very lean but I wanted to hold off on this one. I didn't realize that it would be close to a year before I actually published it. Regardless, I wanted to share some sentiments about looking for work, dealing with that reality, and what my expectations and realities turned out to be. I should mention up-front that this is primarily in the past tense, and I have inserted some new information in a different colored text so you can see what was part of the original draft and what I've added today.

---

Intended Publish Date: 11/16/2023

As fun as it was, and with great thanks to my dear friend Gwen Iarussi who reached out to me with an opportunity that I gladly took, that opportunity has reached its end. 

It's okay. Contracts do that. 

We are asked to accomplish a task or a goal, we go in and we do that, and then when it's all done, we wrap things up with a bow, take a metaphorical bow, and exit stage left. 

Of course, the challenge is that, unless something else is lined up and ready to go, we fall prey to the irregular income cycle. We can be well rewarded for our efforts for a time but we may have lean periods we have to weather between opportunities. Such is life and such is where I'm at right now.

Thus it should come as no surprise that I am now at that point where I am actively looking, trying to figure out what my next move will be, where it will be, and who it will be with. Of course, I am reaching out to any and all with these messages. Today on LinkedIn, I posted this update:


I have had many shares and some wonderful comments from a number of people (and to everyone doing that, thank you very much; it is greatly appreciated), but I want to draw special attention to something Jon Bach did with his reply to me. He said he'd be happy to keep an ear out for me, but in classic Jon fashion, he added a little challenge. Looking out for me is all well and good, but what exactly should he be looking out for? Specifically, what is it I actually want to be doing, given the choices?

I gave a reply and then realized there was much more here I wanted to both ponder and put into better thought, so I'm taking the opportunity to take my original reply and expand it here:

That is an excellent question.

I listed above the areas that I'm most definitely interested in. Did you notice that I didn't mention "test automation"?

There's a reason.

It's not that I *can't* do it or *won't* do it (heck, my last several months have been focused entirely on teaching a class how to use C#, Visual Studio, MSTest, NUnit, and Playwright to do *exactly* that) but I am a weirdo who genuinely enjoys the exploration of testing, the detective work of testing, the journalism of testing. Given my druthers, I would much rather be involved in doing *that*.

I want to encourage the broader adoption and understanding of Accessibility and Inclusive Design. I want to advocate for Testability on products. I want to see a world where people who choose to use a particular set of health and wellness products don't have to force themselves to upgrade and abandon everything they've put together to help them achieve their goals.

I enjoy learning, I enjoy teaching, I enjoy being an advocate. I'm a fan of doing. If that seems squishy and not very well defined, it's a work in progress but honestly, I want to confirm or refute hypotheses, I want to experiment... I WANT TO TEST!!!

Star Date: 10/15/2024

Be careful what you wish for, because you may find out that you both get it and don't exactly get it (LOL!). Much of 2024 had me focusing on technical writing rather than testing. In short, I did very little of what I WANTED to do, but I was 100% happy to do the work that I did. I learned a great deal in the process, and I found that, perhaps, the greatest thing I could have spent my time doing was putting my own writing under a microscope. More to the point, I had a chance to compare my writing to what many AI tools were generating. I confess I was amused that I was being asked to do a writing gig for an extended period at a time when people were commenting on the idea that AI was going to take all of those writing jobs away. I even asked why they would want to hire me when there were so many AI tools out there. The answer was intriguing... "Yes, we have experimented with and worked with the AI tools but they don't sound or feel convincing. We think a real writer and human that is involved in this space will bring insights and emphasis that AI tools do not." I experimented with that idea and decided to do some A/B testing, prompting sites to help me write versus writing directly. My goal was to see how long it took me to have a presentable draft or final product I would be happy with.

Did AI do better than me? Depends on what we mean by "better". Did it outwrite me? No. At least not from a writing quality perspective. Literally nothing I generated from AI prompts was good to go from the get-go. I had to proofread, fix spelling and grammar, and reword a lot of stuff, as well as examine claims made and verify that they coincided with reality and could be backed up by real-world examples. When compared to my regular approach of writing from scratch, the time to a complete product was about the same in both cases. Where did AI excel? It helped me identify potential areas I might have neglected or had blind spots to. In short, it was a nice nudge to look at areas I might have glossed over or had little personal experience with, and it encouraged me to look at those areas. No question, that's an AI win. It didn't do the work for me, but it definitely helped me frame areas I was less alert to. I could then decide if those areas made sense to explore and include (many times they didn't, but a few times they genuinely added to my overall knowledge and experience).

Where do I want to work? I've decided it doesn't really matter where I work as long as I can be effective and useful. I'm fond of saying "I'm a software tester and I can test any software out there". On the surface, that's true. If the goal is to help you make your product more usable, more accessible, more inclusive, and more responsive, I can probably do that with any organization. However, to borrow from Dirty Harry, "A (man) has to know (his) limitations." Odds are, if a biotech company were to look to hire me, they would not be looking for my skills to help make sites accessible. They would be looking for me to investigate how software helps answer biomedical or bioengineering problems.  It's hard to make headway in those areas without previous experience. Not impossible, but I'd definitely be several steps back from people who have already worked in these industries. 


Star Date: 10/15/2024

This is still true. Frustratingly so, in fact, and we can argue all day long about why this is problematic. I fully believe that testers can be effective in many environments, but understanding the problem domain is absolutely going to put any candidate in a better position. Having said that, testers bring many skills to the table, and a lot of the skills they bring might not be obvious. Over the past two years, I have worked as the Marketing Chair for PNSQC, the conference I am attending this week. A great deal of my effort has gone into discovering how to interpret the data of marketing efforts, trying to make sense of sentiment and expectations, learning about and examining analytics, having debates on the value (positive or negative) of SEO, and figuring out which initiatives actually produce engagement and interest. Before my layoff, I had very little understanding of these areas and how I could leverage them in things like my own job search or presenting ideas. Many tech people know about the inner workings of a product but struggle with why someone might actually want to buy or use it. My marketing education has given me significant insights into these areas I didn't have before, as well as how to use them.

I know that the current time will require me to put a lot more attention into seeking a job than I would like to, and the competition for jobs out there is fierce right now. I cannot count the number of jobs I have applied to that, at least on the surface, make sense for me to apply to, where I either hear nothing back or I am rejected in favor of other candidates, even with my years of experience. I keep seeing comments from people saying "Wait, *you're* having trouble finding a job?!" It's a weird feeling, wondering why you are still fighting for work with so much experience. I wish I had an answer for that.

Star Date: 10/15/2024

I'll not be so bold to say I have an answer for this but I will happily share some observations:

- Every tester who is cold applying is going up against everyone else out there applying. You are effectively a number and a score sheet at this point. If a job has 500+ applicants, odds are that, even if you are a 93/100, there are probably a lot of people who are 94/100 or better. If you are outscored by others in some capacity, you are likely not going to get a second look.

- If you know someone at a company you want to work with, reach out to them. If nothing else, they can give you an honest assessment as to whether or not where you are would be what the company is looking for. Additionally, those people, if willing to vouch for you, do put you at a significant advantage over a blind resume. The point is that you now at least have a chance of being seen as a human being as opposed to some output score or metric. Never underestimate the value of that.

- Be willing to reach out to people in your network and start conversations stating you are looking and perhaps be able to tell them some challenge or issue that they have that you could potentially solve. Even if you can't solve their immediate problem, you have shown initiative and interest in what they do. People remember that.

- Be helpful with your network: if you know of a position that someone might be a fit for, let them know. Again, even if they don't get hired, they will remember that you went to bat for them in the past. That makes them much more likely to go to bat for you. Note: I'm not saying this cynically. I seriously mean that if you try to help people sincerely and honestly, those people may very well be your best bet to get in front of a hiring manager they know later.

- Ghost jobs are real, I'm sad to say. Many companies post job listings hoping to get resumes from the cream of the crop, and they will keep a job open until they get "the perfect candidates". We can argue all day long as to the value of doing that (there is no such thing as "the perfect candidate", and "the perfect candidate" will quickly get bored with the job they are an exact fit for). The point is, there are job listings that stay open forever, it seems, and get renewed regularly without getting filled. These are fishing expeditions. Just be aware of that.

- Your "passion project" may be what ultimately gets you interviews and a gig but just as often, it's the mundane work that needs to be done that is more readily available. It may not be what you were immediately looking for but being willing to do some mundane work can again open an avenue to discussing the work you really want to do with people who have now seen and experienced your work ethic.
      



Hello Again! Have you Missed Me?

Every once in a while, I spend less time updating and participating in my blog than I want to. However, I wasn't aware that I hadn't made any posts since 2023! Part of this has to do with using a variety of other avenues to communicate, and being in a mode of looking for work and doing contract work for the duration. It has left me little time or attention for blogging.

That, however, led me to realize that I was violating my prime reason for making this blog in the first place: a promise to learn in public and share what I learned along the way. I have learned quite a bit over the 16 1/2 months between my last full-time job and my current gig (more on that later ;) ). I'll be blunt: much of my time and attention has gone into hunting for work and getting the chance to do interesting work. I've developed course material for teaching manual testers to become SDETs. I co-authored a book with Matt Heusser on "Software Testing Strategies", and while I am happy that it has been well received, I'm realizing I've been terrible about actually talking about it and what's in it. I've worked on a variety of writing assignments, one of which was writing landing page content for a QA company in South America that wanted a more human touch in how their information and services appeared to an English-speaking audience. That was a neat experience, and one where I learned a great deal about improving my writing for a different audience.

I had several opportunities to develop and deliver talks built around the material in our book, which means I have stepped out of my comfortable wheelhouse of Accessibility and random topics and gotten to look at some interesting challenges. I talked about the puzzle pieces of good testing and had the chance to deliver that talk a few times. Matt and I have presented a few times now on Lean Software Testing (which, we should mention, is not the same as or related to the Lean startup) and it has turned into a great way to look at the way we test and how to help organizations and individual testers improve their overall testing efforts (and often without having to ask for permission ;) ).

As I am at PNSQC this week, my history of doing live blogging to capture my thoughts and impressions is still alive and well. I realized that PNSQC 2023 was the last in-person event I attended, and thus it has been a year since I've done these. Apologies or you're welcome, depending on where you fall on that sentiment.

It's good to be back. Let's try to keep this conversation going. 

Monday, December 25, 2023

Software Testing Strategies is NOW AVAILABLE!!!

I realize that I am terrible at self-promotion at times, but I have a very good reason to step up my self-promotion for a change.

I WROTE A BOOK!

OK, now that that's out of my system, Matt Heusser and I co-wrote a book. That's the much more honest answer but hey, my name is on the cover, so I'm claiming that credit ;).

Matt and I spent more than a year working on this project that has become "Software Testing Strategies" and its subtitle is "A Testing Guide for the 2020s". We have endeavored to create a book that is timely and, we hope, timeless. Additionally, we did our best to bring some topics that may not be in other testing books. Through several years of working on podcasts together, writing articles together for various online journals, presenting talks at various conferences, and developing training materials to deliver as training courses as well as online and in-person classwork, we realized we had enough experiences between us to inform and develop a full book.

From our Amazon listing:

Software Testing Strategies covers a wide range of topics in the field of software testing, providing practical insights and strategies for professionals at every level. With equal emphasis on theoretical knowledge and practical application, this book is a valuable resource for programmers, testers, and anyone involved in software development.


The first part delves into the fundamentals of software testing, teaching you about test design, tooling, and automation. The chapters help you get to grips with specialized testing areas, including security, internationalization, accessibility, and performance.

The second part focuses on the integration of testing into the broader software delivery process, exploring different delivery models and puzzle pieces contributing to effective testing. You’ll discover how to craft your own test strategies and learn about lean approaches to software testing for optimizing processes.

The final part goes beyond technicalities, addressing the broader context of testing. The chapters cover case studies, experience reports, and testing responsibilities, and discuss the philosophy and ethics of software testing.

By the end of this book, you’ll be equipped to elevate your testing game, ensure software quality, and have an indispensable guide to the ever-evolving landscape of software quality assurance.

Who this book is for

This book is for a broad spectrum of professionals engaged in software development, including programmers, testers, and DevOps specialists. Tailored to those who aspire to elevate their testing practices beyond the basics, the book caters to anyone seeking practical insights and strategies to master the nuanced interplay between human intuition and automation. Whether you are a seasoned developer, meticulous tester, or DevOps professional, this comprehensive guide offers a transformative roadmap to become an adept strategist in the dynamic realm of software quality assurance.


Table of Contents

  1. Testing and Designing Tests
  2. Fundamental Issues in Tooling and Automation
  3. Programmer-Facing Testing
  4. Customer-Facing Tests
  5. Specialized Testing
  6. Testing Related Skills
  7. Test Data Management
  8. Delivery Models and Testing
  9. The Puzzle Pieces of Good Testing
  10. Putting Your Test Strategy Together
  11. Lean Software Testing
  12. Case Studies and Experience Reports
  13. Testing Activities or a Testing Role?
  14. Philosophy and Ethics in Software Testing
  15. Words and Language About Work
  16. Testing Strategy Applied

Sound interesting? If so, please go and visit the link and buy a copy. Have questions? Want us to delve into some of the ideas in future articles here? Leave a comment and let's chat.

Friday, October 27, 2023

Feeding on Frustration: The Rise of the "Recruiter Scam"

This is truly not an article I wanted to write, but my hope is that my experience may help some people out there.

To put it simply, I have been applying for a variety of jobs because, well, that's what you do when you are between jobs. I have, for the past several months, been working with an organization performing training for a cohort of learners. That started at the beginning of June, and it has recently been completed. With the classes finished, I am now the same "free agent" I was in May.




Thus, it should come as no surprise that I am applying for the jobs that are being posted and that I feel might make for a good fit. Additionally, this is part of my certifying for unemployment benefits. You have to show a paper trail of the companies you are applying to and demonstrate your active job search and the results of that search. Thus, I am making several inquiries each week. It's not surprising that the deluge of messages one gets when they are actively involved in this process makes it difficult to determine what is legitimate and what might be a scam.

Last weekend, as I was working through some things while waiting in my car to get an oil change, I received a message saying that they had reviewed my application and wanted to "short-list" me for interviews and potential hiring. To help with that, they sent me a questionnaire to fill out. I've done many of these, so I didn't at first think anything of it, though as I worked my way through the questions, I started to think, "Wow, this is pretty cool. So many of these questions feel almost tailor-made for me." Part of me was getting suspicious, but I thought, "Ah, you're being that paranoid tester again. It's not like anything they're asking here is weird or harmful." So I decided to submit it.

A few days go by, and I receive an email message saying, "Congratulations! We are pleased to offer you the job of Remote Quality Assurance Engineer at (Company). To facilitate a formal job offer, please provide us with the following (full name, address, phone number, and email)." Again, at first it seemed logical, but then... hang on... if they have my resume, it already contains all of that information. Why would they need me to send it again? Now my tester spidey sense is tingling. This is starting to feel like a scam. Do I disengage at this point, or do I see if I can catch them red-handed?

I figured, "What the heck? Let's roll with it". My name, address, phone number, and email are readily available. We can discuss if that is an intelligent practice another time. In this case, I figured, "Let's go with it."

I received an offer letter. The company looks legitimate. It's a company I applied for. The job description looks beautiful. It matches all of the items I would be looking for... all of them. Now, for anyone who has applied for a job, have you ever seen a job description that was a perfect 10/10, or in this case, a perfect 13/13? Everything felt tailor-made for me. The pay rate also felt right in the pocket. However, here's where things started to go sideways.

"We will send you a check so that you can procure the needed equipment from our preferred vendors. Once you are set up and have everything in place, we can start the necessary training and get you up to speed. We can set up the payment for this procurement by direct deposit, or we can send you a check."   

Ohhhhh, yeahhhhhh!!! Now they are feeling confident (LOL!). 

They have someone willing to give them sensitive information. Did I mention that with the signed cover letter, I was to also send them a copy of my driver's license, front and back? I understand the idea of verifying identity and ability to legally work, but that's what I-9 verification services are for. They are also secure entities. I am not sending my license details over email. With this, I was pretty certain that I had a scammer. Thus I went and did the next things that felt obvious to me. I went back to look up the company and determine if the information they were sending me was accurate. Company name? Checks out. Address? Yes, accurate. Let's do a little search on the name of the person recruiting... oh, would you look at that? There is no LinkedIn profile for this person associated with this company. Hmmm, let's see their job listings... okay, there's the Quality Assurance Engineer's job listing. A quick review... now that's interesting. These are not the same requirements they sent to me. Not only that, but that perfect 13/13 job match was now reduced to an 8/13, with a few of the requirements that I was qualified for not even in the listing, and a few additional items that were not aligned with what I was working with. Yeah, that's a lot more typical. Also, the pay rate was lower than what the scammer was advertising.

With that, I scanned to see who the company listed as their official recruiters and I reached out to them via LinkedIn and simply asked if they were familiar with the individual who contacted me and if they were aware of the odd request to send me a check to buy equipment.  The net result was that, less than an hour later, I saw a post from the company warning people to steer clear of any email communications from one "Maxwell Keen" as they were posing as a recruiter for the company but did not nor had they ever worked with them.

All's well that ends well, right? We caught the scammer, I reported them, and now that's all done, right? Maybe, but I have a feeling that this person is still out there and probably looking for their next target. So with that, consider these some quick safeguards you can take.

- If you need to keep track of your job search, create an intermediate table in Excel or elsewhere that stores the information about the job and who you are communicating with, if possible. At the very least, review the job descriptions on LinkedIn and on their site and verify that they match.

- If there is a contact information space, note it down, especially if there is a contact person with a phone number. You don't need to contact them immediately, but you will want this information should you receive a reply back.

- Getting a questionnaire is fairly standard but it also makes it easy to "cheat" and write down the answers you search for. Again, it's not the most red of flags but I'd argue it's also not very helpful so be leery of anyone sending these and not asking for a phone call/screening.

- If you get an offer for a job where there has been no interview or phone screen or a direct conversation with a human being (either over Zoom or in person), expect that this is probably a scam of some sort. Otherwise, how are they vetting these people?

- Look to make sure that, if you receive an offer letter, there are no misspellings in the document. It's a simple thing, and perhaps petty, but offer letters have a fair amount of boilerplate text for legal purposes. Any legal document will be gone over with a fine-toothed comb for grammatical errors and misspellings. There may be some grammar variation, but misspelled words should automatically give you pause.

- Any reputable company will either work with you to set you up with VPN or other security details to use your equipment as is or they will ship you out a system set up with the software they expect you to use. Being asked to receive a check to procure equipment is an indication that something illegal or shady is happening.

- References are something worth having and including upon request. As my friend Timothy Western pointed out, though, if they are asking for them too quickly or at the very beginning of the process, hold off on providing those. They may be harvesting that information from your references to target them.

Some additional items you can do that should help determine if you are dealing with a reputable recruiter or a scammer:

- Look up recent news about the company to understand its current market and technical position and future outlook. Discussing the latest product launches, partnerships, or corporate changes can help flush out what they know or don't know about the company.

- Read up on employee testimonials on sites like GlassDoor and see if they match what the recruiter is telling you. While this may not necessarily tip you off if they're a scammer, it will help give you some inside perspectives on working conditions and employees' perspectives on their work culture.

- If possible, try to connect with current or past employees who can offer firsthand insights into the company. Definitely see if there is a secondary recruiter there who can at least confirm the interactions you are having.

- If publicly available, review financial reports to assess the company's stability. Ask them some questions to determine what they might know and if their answers corroborate or refute your findings. 

Finally, make sure that everything you see in any communications can be traced back to interactions you initiated and make sense/match the experience you started with. 

Do not trust. Absolutely verify. 

Many of us are struggling with the reality of needing to find work. Let's do what we can to stop these parasites from making this already challenging search even more so.

Tuesday, October 10, 2023

Empathy is a Technical Skill With Andrea Goulet (PNSQC)

 


Today has been a whirlwind. I was up this morning before 5:00 a.m. to teach the penultimate class of my contract (sorry, I just love working that word into things ;) ), but suffice it to say the class is still happening while PNSQC is happening. That has made me a little tired, and thus there's been a little less blogging today. Add to that the fact that I was called in to do a substitute talk in the afternoon (glad to do it, but that was not on my dance card today) and I'm really wondering how we are already at the last talk of the day and the formal conference. Regardless, we are here, and I'm excited to hear our last speaker.

I appreciate Andrea talking about this topic, especially because I feel that there has been a lot of impersonal and disinterested work from many over the past several years. I was curious as to what this talk would be about. How can we look at Empathy as a technical skill? She walked us through an example with her husband where he was digging into a thorny technical problem that was interrupted by Andrea asking him for a moment. His reaction was... not pleasant. As Andrea explained, she realized that he was deeply focused on something so all-consuming that it was going to be a big deal to get his attention for needful things. Instead of it being an ugly altercation, they worked out a phrase (in this case, "Inception") to help see when a person is on a deep dive and needs to be in their focused state, at least for a little while longer. While I don't quite know that level of a dive, I have times in my own life when I get caught up in my own thoughts and I bristle when someone interrupts/intrudes. By realizing these things, we can not just recognize when we ourselves are focusing on deep dives, but we can also recognize when others are as well. This is a development of our own empathy to aid us in the process of understanding when people are dealing with things.


Okay, that's all cool, but why is this being called a technical thing? Because we are free and loose with the use of the word "technical". Technical comes from the Greek word "techne", and techne means "skill". That means any skill is technical when we get down to it. It also means it's a skill that can be learned. Yes, we can learn to be empathetic. It's not something we are born with; it's something we develop and practice. Motivation likewise drives empathy. In many ways, empathy can be a little mercenary. That's why we get it wrong a lot of the time. We often want to reach out and help in ways that we would want to be helped, and thus our empathy is highly subjective and highly variable. Additionally, empathy grades on a curve. There are numerous ways in which we express and experience empathy. It's not a monoculture; it is expressed in numerous ways and under different circumstances and conditions. There are a variety of components, mechanisms, and processes that go into our understanding and expressions of empathy. It's how we collaborate and solve complex problems. In short, it's a core part of our desire and ability to work together.

Andrea showed us a diagram with a number of elements. We have a variety of inputs (compassion, communication) that drive the various mechanisms that end up with a set of outputs. Those outputs come down to:

  • Developing an understanding of others 
  • Creation of Trust
  • A Feeling of Mutual Support
  • An overall synergy of efforts   

 Empathy requires critical thinking. It's not all feelings. We have to have a clear understanding and rational vision of what people want, and not specifically what we want. 

On the whole, this is intriguing and not what I was expecting to hear. Regardless, I'm excited to see if I can approach this as a developed skill.