I've been struggling lately with the fact that each of our teams does things a little bit differently. There's nothing necessarily wrong with that, but it does present a challenge: a tester on one of our teams would probably struggle to be effective on another. We have a broad variety of software offerings under one roof, and many of those products came to us through, you guessed it, acquisitions (I mean, how else do you acquire something? ;) ).
Point being, each team has its own set of tools, initiatives, and needs, mainly because each team originated in a different place, but also because each did some work and adopted its own processes before being picked up by the main company.
I'm sure I've explained this over the years, but Socialtext, the company I joined in 2012, was acquired by PeopleFluent. PeopleFluent had acquired a host of other companies along the way, in addition to having its own core product. A few years ago, PeopleFluent itself was acquired by Learning Technologies Group (LTG) in the UK. Additionally, as of the past year, I work with a specialty team that sits in the middle and tries to make it possible for each of the teams to play nicely with all the others (i.e. the Transformations or Integrations team). The neat thing is that there is a variety of products and roles to work with. The biggest challenge is that there's no real lingua franca in the organization, and not for lack of trying ;). At the moment, we as a company are trying to see if we can standardize on a platform and a set of languages. This is a process, and I predict it will take a while before it becomes company-wide and fully adopted, if it ever actually is (note to my company: that's not a dig or a criticism, just my experience over thirty years of observing companies. I'm optimistic but realistic, too ;) ).
That's just looking at the automation landscape. It does not include the variety of manual test areas we have (and there are a lot of them). Each organization champions the idea of 100% automated testing. I don't particularly, but I also don't worry about it too much, because I don't believe there is any such destination to arrive at. There will always be a need for Exploratory Testing, and as such there will always be a need and a focus for manual testing.
What this ultimately means is that we will likely always have a disjointed testing environment. There will likely never be "one ring to rule them all," and because of that, we will have disparate and varied testing environments, testing processes, and testing results. How do we get a handle on seeing all of the testing? I'm not someone who has a particular need for that, but my manager certainly does, and my Director certainly does. They need a global view of testing, and I don't envy their situation.
Whew, deep breath.... that's why I'm here for this talk: to see how I might get a better handle on all of our test data and efforts, and how we can get the best information in the most timely fashion.
Joel's talk is about "Orchestrating the Testing Process". For those not familiar with music notation and arrangement, orchestration is the process of getting multiple instruments to work together off the same score (in this case, the score being the notation for each and every instrument, written so that everyone plays together when warranted and comes in at the right time when called for). Testing fits this metaphor as well.
So what do we need to do to get everyone on the same page? Well, first of all, we have to realize that we are not necessarily even trying to get everyone on the same page in the literal sense. Teams need to work together, and they need to be understood together, but ultimately the goal of orchestration is that everyone works together, not that everyone plays in unison or even in close harmony.
Orchestration implies a conductor. A conductor doesn't play every instrument; generally speaking, a conductor doesn't play any instrument at all. They know where and when each part needs to come in. In testing, that may happen through regular status meetings, or it may happen through pipeline development. It may also mean that refactoring tests is as important as creating them. Test reporting, and gathering and distilling that information, becomes critical for successful conducting/orchestration.
Is there a clean and elegant solution for this? No, not really. It's a hands-on process that requires coordination to be effective. As a musician, I know full well that to write hits, we have to just write a lot of songs. Over time, we get a little bit better at writing what might hit and grab people's attention. Even if writing "hits" isn't our goal, writing songs ultimately is, and that means we need to practice writing songs. The same goes for complex test environments: if we want to orchestrate those efforts, we need to write our songs regularly and deliberately.