Hey! Finally, I'm able to work that silly line a little more literally ;). Yesterday was the workshop day for CAST, but today is the first full general conference day, so I will be live blogging as much of the proceedings as I can, if the WiFi will let me.
CAST 2014 is being held at the New York University Kimmel Center, which is just outside the perimeter of Washington Square. The pre-game show for the conference (or one of them, in any event) is taking place in the lobby of the Marlton Hotel. Jason Coutu is leading a Lean Coffee event, in which all of the participants get together and vote on the topics they want to talk about, and then discussion revolves around the topics that get the most votes.
---
This morning we started out with the topic of capacity planning, and the attempt to manage and predict team capacity. Variance between teams can be dramatic (a three person test team is going to have lower capacity than a fifty or one hundred person team). One interesting factor is that estimation for stories is often wrong; story points and similar measures tend to regress to the mean. One attendee asked "what would happen if we just got rid of the points for stories altogether?" Instead of looking at points for stories, we should be looking at ways to get stories to roughly the same size. It may be a gut feeling, it may be a time heuristic, but the effort may be better spent just making the stories smaller, rather than getting too involved in adding up points.
Another interesting measurement is "mean time to fix the build". Another idea is to see which files get checked out and checked in most frequently, to see where the largest amount of modification is taking place and how often. Some organizations look to measure everything they can measure. One quip was "are they measuring how much time they are spending measuring?" While some measurements are red herrings, there are often valid areas where it makes sense to measure and learn what is needed to remedy. A general consensus was that the desire for lots of fine-grained measurements is less effective than simply targeting effort at getting the release stable and fixing issues as they happen.
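As a side note, that file-churn measurement is easy to approximate. Here's a minimal sketch (my own construction, not anything shown at the session) that counts which files change most often, assuming a local git repository; the function name `file_churn` is just my invention for illustration.

```python
# Sketch: count how often each file has changed recently, via "git log".
# My own illustration of the "churn" idea from the discussion; assumes a
# local git repository and Python 3.7+.
import subprocess
from collections import Counter

def file_churn(repo_path=".", since="3 months ago"):
    """Return a Counter mapping file path -> number of commits touching it."""
    log = subprocess.run(
        ["git", "log", "--name-only", "--pretty=format:", f"--since={since}"],
        cwd=repo_path, capture_output=True, text=True, check=True,
    ).stdout
    # "--name-only" lists the changed paths for each commit, one per line.
    return Counter(line for line in log.splitlines() if line.strip())

if __name__ == "__main__":
    for path, count in file_churn().most_common(10):
        print(f"{count:5d}  {path}")
```

Pointing something like this at a codebase gives a quick heat map of where modification is concentrated, without building out a whole measurement program.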
---
Another topic was the challenge of what happens when a company has devalued or lost the exploratory tester skill set due to focusing on "technical testers". A debate arose over what "technical tester" actually means, and in general, it was agreed that a technical tester is a programmer who writes or maintains automated testing suites, and who meets the same level/bar that the software engineers meet. The question is, what is being lost by having this be the primary focus? Is it possible that we are missing a wonderful opportunity to work with individuals who are not necessarily technical, or for whom that is not their primary focus? I consider myself a somewhat "technical tester", but I much prefer working in an environment where I can do both technical and exploratory testing. A comment was raised that perhaps "technical tester" is limiting. "Technically aware" might be a better term, in that the need for technical skills is rising everywhere, not just in the testing space.
---
The last topic we covered was "testing kata", and this is a topic of great interest to me because of ideas that we who are instructors in Miagi-do have been considering implementing. My personal desire is to see the development of a variety of kata that we can put together and use in a larger sphere. In martial arts (specifically, Aikido), there is the concept of "Randori", which is an open combat scenario where the participant has multiple challengers, and needs to use the kata skills they have learned. For the kata part, we have a lot of examples. The randori? That's an open area that is ripe for discussion. The question is, how do we put it into practice? I'd *love* to hear other people's thoughts on that :).
---
After breakfast, we got things started with Keith Klain welcoming everyone to the event, and Rich Robinson explaining the facilitation process. As many know (and I guess a few don't ;) ), CAST uses a facilitation method that optimizes the way the audience can participate. The K-cards we hand out let people signal how and when they want to raise questions, and make sure everyone who wants a say can get their say.
James Bach is the first keynote, and he opened with the reality that talking about testing is very challenging. It's hard enough to talk about testing with other testers, but talking to people who are not testers? Fuhgeddaboudit! Well, no, not really, but it sure feels that way. Talking about testing is a challenging endeavor, and very often there is a delayed reaction (Analytical Lag Time) where the testing you do one day comes together and gives you insights an hour or a day later. These are maddening experiences, but they are very powerful if we are aware of them and know how to harness them.

The title of James' talk is "Testing is Not Test Cases (Toward a Performance Culture)". James started by looking at a product that would allow a writer working on a novel to change and modify "scene" order. The avenues that James was showing looked like classic exploratory techniques, but there is a natural ebb and flow to testing. The role of test cases is almost negligible. The thinking process is constantly evolving. In many ways, the process of testing is writing the test cases while you are testing. Most of the details of the test cases are of minor importance; the actual process of thinking about and pushing the application is hugely complex. An interesting point James makes is that there is no test for "it works". All I know for sure is that it has not failed at this point in time.
The testing we perform can be informed by some basic ideas, some quick suggestions, and then following the threads that those suggestions give us. Every act of genuine testing involves several layers: there's a narrative or explanation, there's an enactment of test ideas, there's the knowledge of the product that we have, there's the tester's role and integration in the team, there's the skill the tester brings to the table, and there's the tester's demeanor and temperament. All of these aspects come into play and help inform what we can do when we test.
The act of testing is a performance. It can't truly be automated, or put into a sequence of steps that anyone can do. That's like expecting that we can get a sequence of steps so that anyone can step in and be Paul Stanley of KISS. We all can sing the lyrics or play the chords if we know them, but the whole package, the whole performance, cannot be duplicated, not that there aren't many tribute band performers that really try ;).
James shared the variety of processes that he uses to test. He shared the idea of a Lévy Flight, where we sample and cover a space very meticulously, then jump up and "fly" to some other location and do another meticulous search. The Lévy Flight heuristic is meant to represent the way that birds and insects scour an area, then fly off in what looks like a random manner, and then meticulously search again for food, water, etc. From a distance it seems random, but if we observe closely, we see that even the flying around is not random at all; instead it's a systematic method of exploration. Other areas James talked about are modeling from observations, factoring based on the product, experiment design, and using tools that can support the test heuristic.
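To make the Lévy Flight idea concrete, here's a small sketch (my own toy construction, not code from James' talk) of a Lévy-flight-style walk over a two-dimensional input space, which could feed parameter pairs to whatever you are testing. The heavy-tailed Pareto step length is what produces the "scour locally, occasionally jump far away" pattern.

```python
# Sketch: a Lévy-flight-style walk used as a test-idea generator.
# My own illustration of the heuristic, under the assumption that the
# input space can be mapped onto a bounded 2-D range.
import math
import random

def levy_flight_points(n_steps, alpha=1.5, bounds=(0.0, 100.0)):
    """Yield sample points: mostly small local moves, with rare long jumps."""
    lo, hi = bounds
    x = y = (lo + hi) / 2
    for _ in range(n_steps):
        step = random.paretovariate(alpha)      # heavy-tailed step length
        angle = random.uniform(0, 2 * math.pi)  # uniform direction
        x = min(max(x + step * math.cos(angle), lo), hi)
        y = min(max(y + step * math.sin(angle), lo), hi)
        yield (x, y)

# e.g., map each point onto two numeric inputs of the application under test
for px, py in levy_flight_points(20):
    print(f"try input pair ({px:.1f}, {py:.1f})")
```

Most of the samples cluster near wherever the walk currently is; every so often a long jump lands the walk somewhere new, and the local scouring starts again.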
James created a "spec" based on his observations, but recognized that his observations could be wrong, so he looks to share them with a programmer to make sure that his observations match the intended reality. There is a certain amount of conjecture here. Because of that, precise speech is important. If we are vague, we give leeway for others to say "yes, those are correct assumptions about the project"; the more specific we are, the less wiggle room there will be. Issues will be highlighted, and easier to confirm as issues, if we are consistent with the terms we use. The test cases are not all that interesting or important. The tester's spec, and the questions we develop and present at the end, are what matter. However, just as Paul Stanley singing "Got to Choose" at Cobo Hall in Michigan in 1975 will not sound exactly the same as the performance of that song in Los Angeles in 1977, a testing round a week later may produce a totally different document, with similarities, but perhaps fundamental differences, too.
Does this all mean that test cases are completely irrelevant and useless? No, but we need to put them in the place in the hierarchy where they actually belong. There is a level of focus, and there are ways that we want to interact with the system. Having a list of areas to look at, so as to not forget where we want to go, certainly helps. Walking through a city is immensely easier if we have a map, but the map is not entirely essential. We can intuit from street names and address numbers, we can walk and explore, we can ask for directions from people we meet, etc. Will the map help us get directly where we want to go? Maybe. Will following the map show us everything we want to see? Perhaps not. Having a willingness to go in various directions because we've seen something interesting will tell us much more than following the map to the letter. So it is with test cases. They are a map. They are a suggested way to go. They are not *THE* way to go.
Ultimately, it comes down to the fact that testing is a performance. Fretting about test cases gets in the way of the performance. Back to watching KISS (or The Cure or The Weeknd, if we want to be more inclusive): they have songs, they have lyrics, they have emotive passages, but at the end of the day, an instance of a performance can be encoded, and it represents only one instant in time. Every performance is different; every performance is on a continuum. You can capture a single performance, but not all of the performances that could be made. Test cases can guide us, but if we want to perform optimally, we have to get beyond the test cases. We can capture the actual notes and words, but there is no automation for "showmanship" ;).
This comes down to Tacit and Explicit Knowledge. I remember talking about this with James when we were at Øredev in Sweden: how do you teach something when you can't express it in words? There is a level of explicit knowledge that we can talk about and share, but there's a lot of stuff buried underneath that we can't explain as easily (the tacit knowledge). Transferring that tacit knowledge comes down to experience and shared challenges. Most important, it comes down to actively thinking about what you are looking at, and doing what you can to make for a performance that is both memorable and stands up to scrutiny.
---
As in all of these sessions, there are so many places to go and talks to see that it is difficult to decide what to attend. For that purpose, I am deliberately going to let the WebCAST stream speak for itself. If you can view the WebCAST presentations, please do so. After the recordings are available, they will be posted to the AST Channel for later viewing. For that reason, I am going to focus on sessions I can attend that are not going to be recorded, and that are relevant to my own interests and aspirations. With that, I was happy to join Alessandra Moreira (@testchick) and her talk "My Manager Would Never Go For That", or more succinctly, how to apply context-driven principles to the art and act of persuasion. I always think of the scene in the film "Amadeus" where Mozart is trying to convince the Emperor to let him go forward with the staging and presentation of "The Marriage of Figaro". The Emperor at one point says "You are passionate, Mozart, but you do not persuade". This is a key reminder to me, and I'm guessing Ale is very familiar with it. Being passionate is not enough; we have to persuade others to see the value in what we are passionate about.
Sometimes this comes down to a decision of "should I stay or should I go?". Do I change my organization, or do I change my organization? (That is, do I improve the organization I'm in, or do I change which organization I'm in?) Persuasion may or may not come about, but the odds are we can do a better job of persuading if we are ourselves willing to be persuaded. Conversations are two-way streets. We learn new things all the time. Are we willing to adapt our position given new information? If not, why not? If we think that our way is the best way, and we are not willing to bend with what we are told, why should anyone else be persuaded by us? Influence is not coercion, and it's not manipulation; it's a process of guiding and suggesting, offering information and examples, and "walking the walk" that we want to influence in others. There is a three-step process in the art of persuasion. First, we need to discover something that we feel is important. Second, we need to prepare, to get our ducks in a row so to speak; we need to have supporting evidence that we understand what we are doing and that we have a compelling case. From there, we need to communicate and embark on an honest and frank dialog about what we want the outcome to be.
In my own experience, I have found that persuasion is much easier if you have already done something on your own and experienced success with it. Sometimes we need to explore options on our own and see if they are viable. Perhaps we can find one person on our team who can offer a willing ear. I have a few developers on my team who are often willing to help me experiment with new approaches, as long as I am prepared to explain what I want to do and have done my homework up front. People are willing to give you the benefit of the doubt if you come prepared. If you can present what you do in a way that convinces the person who needs to be persuaded that you have worked to be ready for them to do their part, they are much more likely to go along with it. Start small, and get some little successes. That will often get those first few "adopters" on board with you, and then you can move on to others. Over time, you will have proven the worth of your idea (or had it disproved), and you can move forward, or you can refine and regroup to try again.
I'm fortunate in that I have a manager who is very willing to let me try just about anything if it will help us get to better testing and higher skill, but it's likewise important that I do my homework first, as it helps build my credibility on the topic at hand. Credibility goes a long way toward persuading others, and credibility takes time to build. With credibility comes believability, and with believability comes a willingness to let you try an idea or experiment. If the experiments are successful in their eyes, they will be more likely to let you do more in the areas where you are aiming to persuade. If the experiments fail, do not despair; it may just mean you have to adapt your approach and make sure you understand what you need to do and how it fits your own organization.
One of the key areas where people fail when it comes to persuasion is "compromise". Compromise has become a bad word to many, but it's not that you are losing; it's that you are willing to work with another person, to validate what they are thinking, and to let them see what you are thinking. It also helps to start small: pick one area, or a particular time box, and work out from there.
---
During the lunch break, Trish Khoo stepped on stage to talk about "Scaling Up with Embedded Testing". Trish described a lot of the testing efforts she had been involved in, where the test code she had written was disregarded by the programmers, since they felt what she was doing was just some other thing that had to be done so they could do what they needed to do. Fast forward to her working in London, where the programmers were talking about how they would test the code they were writing. This was a revelation to her, because up to that point she had never seen a programmer do any type of testing. Since many of the efforts she was used to performing were now being taken care of by the programmers, her role became more challenging; she had to be a lot more inquisitive and aggressive in looking for new areas to explore.
We often think of the developer and tester as being responsible for finding and fixing bugs, and the product owner and the tester as responsible for verifying expectations. The bigger challenge is that these loops chew up hours, days, and weeks as we constantly cycle through finding and fixing bugs and verifying expectations. Interestingly, when we ask developers to focus on writing tests to help them write better code, the common answer is "yeah, that makes sense, but we don't have time to do that now, we'll do that next sprint", and then they say the same thing the next time it comes up. How do we convince an organization to consider a different approach, one where developers get more involved in test? Trish looked at a number of different organizations to see how they did it.
One of the people Trish talked to was Elisabeth Hendrickson of Cloud Foundry/Pivotal. Interestingly, Cloud Foundry does not have a QA department. That's not to say that they do not have testers, but they have programmers who test, and testers who program. There is no wall. Everyone is a programmer, and everyone is a tester. Elisabeth has a tester on the team by the name of Dave Liebreich (Hi Dave ;) ). While he is a tester, he does as much testing as the programmers, and as much code writing as the programmers.
Another person she talked to was Alan Page of Microsoft (Hi, Alan ;) ). Some of the teams at Microsoft have moved to a model where everyone has dispensed with job titles. Ask Alan what his job title is and he'll say "generic employee" or, if pushed, "software engineer". The idea is that they are not confined to a specific specialty. The goal is that, instead of having people in roles, they open up opportunities for people to do what their skill set and passion provide. The net result is that managers orchestrate projects based on skill. Instead of hiring "testers who code", they are looking to hire "people who can solve problems with code". The idea of tester as a fixed role is not relevant; everyone codes, everyone tests.
The third case study was with Michael Bachman at Google. In a previous incarnation, Google would outsource a lot of the manual testing to vendors, mostly to look at the front-end UI. Much of the coverage those testers were providing ignored about 90% of the code in play. For Google to stay competitive, they opted to change their organization so that Engineering owned quality as a whole. There was no QA department. Programmers would test, and there was another team, called Engineering Productivity, who helped teach about areas of testing, as well as investing in Software Engineers in Test (SETs), who could then help instruct the other programmers in methods related to software testing. The idea at Google was that "Quality is Team Owned, not Test or QA Owned".
What did they all have in common? Efficiency was the main driver. Teams that have gone to this model have done so for efficiency reasons. There are lots of other words associated with this (Education, Feedback, Upskill, Culture, No Safety Net, etc.). One word that is missing, and that I'd be curious to see, is effectiveness. Overall, based on the presentation, I would say that effectiveness was also part of this process. Efficiency without effectiveness will ultimately cause an organization to crash and burn; therefore, there is real value in these changes.
So what does that mean for me as a tester? It means the bar has been raised. We have some new challenges, and we should not be afraid to embrace them. Does that mean that exploratory testing is no longer relevant? Of course not; we still explore when we develop tool-assisted testing. We do end up adding some additional skills, and we might be encouraged to write application code, too. As one who doesn't have a "developer" background, I'm not automatically at a disadvantage, but it does mean I would be well served to learn a bit about programming and to get involved in that capacity. It may start small, but we can all do some of it if we give it a chance. We may never get good enough at it to become full-time programmers, but this model doesn't really require that. Also, that's three companies out of tens of thousands. It may become a reality for more companies, but rather than being on the tail end of the experience and having it happen to you, perhaps it may help to get in front of the wave and be part of the transition :).
---
The next session I opted to attend was "The Psychology and Engineering of Testing", presented by Jan Eumann and Ilari Aegerter. Both Jan and Ilari work at eBay, but they are part of the European team, and the European market and engineering realities are different from what goes on in Silicon Valley. There is a group of testers based in London and Berlin that gets software from the U.S. to test, while the European team has software testers embedded in the development teams.
Who works as an embedded tester on an Agile team? Overall, they look for individuals with strong engineering skills, but they also want to see the passion, interest, and curiosity that help make an embedded tester formidable. The important distinction is that the embedded testers at eBay Europe are not thinking of programming and testing as "either/or", but as "as well as". They are encouraged to develop a broad set of skills to help solve real problems alongside the best people who can solve them, rather than to focus on just one area.
When Jan and Ben Kelly were embedded within the European teams, there was an initial experience of Testers vs. Programmers, but over time, developers became test-infected, and testers became programming-savvy along with it. This prompted other teams to say "hey, we want a tester, too". In this environment, testers and programmers both win.
Though the teams are integrated, the testers still report to testing managers, so while traditional reporting structures remain, there is a strong sense of interconnection between the programmers and testers in the current culture. The Product Test Engineering team has its own Agile manifesto that helps define the integration and importance of the role of test, and how it's a role shared throughout the whole team. If the goal of an embedded tester is to be part of a team, then it makes sense, in Jan's view, to be with the team in space, attitude, and purpose. Sitting with the programmers, hanging out with the programmers, meeting with the programmers: all of these help to make sure that the tester is involved right from the start.
Additionally, testers can help teach programmers some testing discipline and an understanding of testing principles. Testers bring technical awareness from other domains. They also have the ability to help guide testing efforts in early-stage development, and to inform and encourage setups that programmers might not put in place were there not a tester involved. It sounds like an exciting place to be, and an interesting model to aspire to.
---
I love the cross pollination that occurs between the social sciences and software testing, and Huib Schoots has a talk that addresses exactly that.
We often treat software testing and computer science as though they are hard sciences like mathematics or physics or chemistry. They have principles and components that are similar, but in many ways, the systems that make up software are more akin to the social sciences. We think that computers will do the same thing every single time in exactly the same way. The fact is, timing, variance, user interactions, congestion, and other details all get in the way of the specific factors that would make "experiments" in the computer science domain truly repeatable. I've seen this happen in continuous integration environments, where a series of tests that failed at one time passed the second time they were run, without changing any parameters; something caused the first run to fail and the second to pass. There can be lots of reasons, but usually they are not physics or mechanical details, but coding and architectural errors. In other words, people making mistakes. Thus, social rather than hard sciences.
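As a toy illustration of that non-repeatability (my own sketch, not something from Huib's talk), here is a "test" whose outcome depends entirely on how two threads happen to interleave. Run it repeatedly and it will sometimes pass and sometimes fail, with no parameters changed:

```python
# Sketch: a deliberately flaky test. The read-modify-write on "counter" is
# not atomic across threads, so the result depends on thread interleaving.
import threading

counter = 0

def increment_many(times=100_000):
    global counter
    for _ in range(times):
        value = counter   # read
        value += 1
        counter = value   # write back; another thread may have run in between

def test_counter():
    global counter
    counter = 0
    threads = [threading.Thread(target=increment_many) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Same test, same parameters, different results on different runs.
    assert counter == 200_000, f"lost updates: counter == {counter}"

if __name__ == "__main__":
    test_counter()
    print("passed this time")
```

Nothing about the hardware is random here; the flakiness comes from a design mistake (shared mutable state without synchronization), i.e., a person's error, which is exactly the point.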
Huib shifted over to the ideas in the book "Thinking, Fast and Slow", in which simple things are calculated or evaluated very quickly, while other, more complicated matters require a different kind of thinking. Karl Marx developed theories about how people should interact, and while the theories he prescribed have been shown to not be ideal, they are still based on the realities of how humans interact with one another. The science of Sociology informs many aspects of the way that we work and interact with others, which in turn informs our designs of systems. Claude Lévi-Strauss represents Anthropology, which deals with the way that different cultures are structured and the environmental factors that help shape those structures. Maria Montessori represents Didactics and Pedagogy, a.k.a. learning and the methodology that informs how we learn. He used his girlfriend to represent Communication studies, and the fact that the way we talk to one another informs the way we design systems, because the communication aspect is often what gets in the way of what an application does (or should I say, the inability to communicate smoothly gets in the way).
Science and research are areas that inform a great deal of what a software tester actually does. Sadly, very few software testers are really familiar with the scientific method, and without that understanding, many of the options that can help inform test design are missing. I realized this myself several years ago, when I stopped considering a long list of test cases to be effective testing. By going back and considering the scientific method, I was able to reframe testing as a scientific discipline in its own right. However, we do ourselves a tremendous disservice if we only use hard science metaphors and ignore the social sciences and what they tell us about how we communicate and interact.
We focus so much attention on trying to prove we are right, but that's misguided; we cannot prove we are right about anything. We can disprove, and when we say we've "proven" something, what we really mean is that we have not yet found anything that disproves what we have observed. Over time, and with repeated observation, we can come close to saying something is "right", but only until we get information that disproves it. The theory of gravity seems to be pretty consistent, but hey, I'll keep an open mind ;).
Humans are not rational creatures. We have serious flaws. We have biases we filter everything through. We are emotional creatures. We often do things for completely irrational reasons. We have gut feelings that we trust, even if they fly in the face of what would be considered rational. Sometimes they are right, and sometimes they are wrong, yet we still heed them. Testing needs to work in the human realm. We have to focus on the sticky and bumpy realities of real life, and our testing efforts likewise have to exist in that space.
---
Martin Hynie and Christin Wiedemann focused on a talk that was all about games. Well, to be more specific, "Why testers love playing – Exploring the science behind games". Games are fundamental to the way that we interact with our environment and the way we interact with others. Games help us develop cognitive abilities; the more we play, the more cognitive development occurs. This reminds me a lot of the work and talks I have seen given by Jane McGonigal on how gaming and game culture affect both our thinking and our psychological being.
OK, that's all cool, but what does this have to do with testing?
Testing is greatly informed by the way we interact with systems. We try out ideas and see if they will work based on what we think a system might do. While the specific skills learned in games may not transfer directly, the ways of looking at situations, and the inspiration that gives us ideas, do. Game play has been shown to modify and change our cortical networks. It was fascinating to see the way in which Martin and the other testers on the team approached this as a testing challenge.
The test subjects had their brains scanned while they were playing games, and the results showed that gaming had a measurable impact on the brain. Those who played games frequently showed cooler areas of brain activity than those who did not play games, which suggests that gaming optimizes neural networks in many cases. Martin also took this process further with a more specific game, i.e. Mastermind, and looked at what that game did to his brain.
So are games good for testers? The jury is out, but the small sample set certainly seems to indicate that yes, games do indeed help testers, and that the culture of tester games, and other games, is healthy. Hmmm, I wonder what my brain looks like on Silent Hill... wait, forget I said that, maybe I really don't want to know ;).
---
A great first day, so much fun, so much learning, and now it's time to schmooze and have a good time with the participants. See you all tomorrow!