In my ever-running tradition of live blogging events, I'm going to do my part to live blog PNSQC and the observations, updates and other things I see while I am here. Rather than do a tweet stream that will just run endlessly, I decided a while ago that an imprecise, scattered, lousy-first-draft stream-of-consciousness approach, followed by cleaning things up into a more coherent presentation later, is a fairly good compromise. So, if you would like to follow along with me, please feel free to just load this page and hit refresh every hour or so :).
---
Let's see, first thing I should mention is the really great conversations I had yesterday with Pete Walen, Ben Simo and Gary Masnica, who joined me for various assorted meals as I came into Portland and got settled for this week. I want to also thank Portland in general for being a simply laid out city with lots of options for food and diversion in a very short distance. It reminds me of San Francisco without the triangular grids :).
We had a breakfast this morning for all of the people who helped review papers and get them whipped into shape for the proceedings. Seeing as the proceedings book is a 500-plus-page collection of the talks, presentations and other details that make up a record of the conference, and for many an official way to get ideas published and disseminated, this was a great chance to meet the very people who helped make these papers be the quality that they are. We had some fun discussions, one of which was based around the "do not bend" instructions placed on my poster tube. Yes, we figured out ways that we could more effectively damage the tube beyond trying to bend it ;).
---
Right now, we are getting things started, and Matt Heusser will be giving the opening keynote talk. Matt has stated on Twitter that this is the talk of his life. I'm excited to hear it. The goal of Matt's talk is to discuss "The History of the Quality Movement and What Software Should Do About It". Much of software's issues stem from its analogs to physical manufacturing, as Matt brought us all the way back to Taylor and Bethlehem Steel and the way the workers carried "pig iron" across the yard, so that they could be more efficient (and save the men's backs). By measuring the way that the men were carrying the iron, they worked through the methods necessary to triple productivity (and make a lot more money). Matt took this legendary myth and broke it down to show the inconsistencies in the story of this "one right way" that has been championed ever since. The idea of separating the worker from the work persists, though. Henry Ford and the Hawthorne experiments followed later: researchers watched people assembling telecommunications switches in the 1920s and examined how changing the environment would improve productivity. In fact, just about any change improved productivity. The cynical response is "you will improve any aspect that you directly measure". The more human reply is "if you let people know that you care about their welfare, they're likely to improve just because they know you care!"
World War II set the stage for America's ascendancy, not because America was so fantastic in how it did things, but because we were really the only game in town still standing. Europe was bombed, Japan was destroyed, and Russia was gobbling up its hemisphere and locking out everyone else. This made for a unique environment where many parts of the world were looking to rebuild and the U.S. was exporting talent and know-how all over the world. One of those guys, as testers know, was W. Edwards Deming. Deming didn't find many people interested in "Quality" in the U.S., as companies had customers coming out of their ears. Japan, however, welcomed Deming with open arms, and Deming took a walk through their environments to see what was really happening. By discovering the challenges earlier on, the quality of the product improves. To get there, though, you have to go where the real work happens.
Cute question... if you have a dress that's supposed to be 50% cotton and 50% wool, and you take a top, which is 100% cotton, and a skirt that is 100% wool, and you sew the two together... does that meet the requirement of a 50/50 blend ;)? That may sound silly on the surface, but truth be told, that could be seen as meeting the requirements. It shows that language is vague, and we have to go beyond the ideas of "standards" when we make our definitions, so that common sense has a chance to be applied. The most famous of the examples from Deming of course comes from Toyota. Toyota took Deming's ideas and applied the Japanese concept of "kaizen", or "continuous improvement". They took the approach that they wanted to do all they could to improve the just-in-time delivery involved in building automobiles.
Womack and Jones came to Japan to see why it (the Japanese manufacturing movement) was suddenly clobbering American manufacturing, with "Lean manufacturing" growing out of Toyota's process improvements. From here, ideas like one-piece flow, reduced cycle time, and the "Kanban" process (the manufacturing approach, not the current agile methodology) grew out of these experiments and observations. The point is, we looked at the systems that we could standardize, but we took out the human element that made it actually work. Meanwhile, back in the States, we decided to look at international standards and Six Sigma. Standards are not bad. They're kind of essential if you want to buy an AA battery and make sure it will work. That's a good thing! Making sure a Duracell or Energizer delivers 1.5 volts per unit, consistently, matters a lot. Standards help make that happen. Process standards are a bit more fuzzy. Do they make sense, really? Sometimes they do, but often they don't.
The idea of Six Sigma works for manufacturing when it comes to very specific processes, but when we apply it to areas like software defects, and to predicting this amorphous, moving product, Six Sigma doesn't really apply. Very often, those proposing Six Sigma for software don't have the slightest idea why the analogues from manufacturing physical hardware products don't match up to actual software applications. The ideas of functions, processes and methods seem to make sense, until we actually try to apply them to really creating code and how code actually works (and no, I will not pretend that I understand how code actually works. Many minds far better than mine have admitted that even they don't know how it *really* works!). The biggest factor that we need to reconsider (and that seemed to be lost from the Deming discussions with Toyota and others) is that the Japanese companies did not remove the human element from the work. Here in the U.S., we did (or certainly did everything we could to try). To an extent, some things can be measured and controlled, but many things cannot be so easily tweaked.
This is all great, but what can we do with all of this? Matt wanted to make sure we went home with some things we could use immediately, and one of the ideas he said we could all jump on right away is the idea of operational definitions. I had to jump in here and say that this reminded me of Elisabeth Hendrickson's recent writing on exploring requirements in her exploratory testing book (yes, I'm giving Elisabeth a plug here, it's my blog, deal with it ;) ). If you don't think this is important, consider the recent Presidential debate we had. Two sides saying the other is lying, while each side believed wholeheartedly that it was telling the truth. How can both be right? When they are using different definitions, it's very possible. Agreeing on the definitions helps a lot. The cool thing is, it's a lot easier to agree on definitions in software than it is in politics. Thus, it's often important to realize that definitions of Quality suffer from the same problems.
Some other things we can do? Start a book club and discuss these ideas. By getting the principal players together to discuss a book, you can take out politics and divisions, and just focus on the ideas. Even if you can't completely remove the politics, you can minimize it, focus on the issue, and give everyone a chance to contribute to addressing it. Another thing we can do is start taking our own "Gemba walks". Take the time to look at the real issues facing your team or organization. People doing the work need to feel like they can improve it themselves. When they just "follow the process" they have effectively given up. Do what you can to defeat that defeatism. Nice! Good takeaway and great talk. Last-minute details: if you have the chance to send a team member to a conference, do what you can to help them have that experience. If they are deeply engaged in the work, they are likely to come home energized and full of ideas. Take the chance to mess with a few of these ideas. Worst case, you may do something that doesn't work. Best case, it just might work :). What we as attendees need to STOP doing is saying "I want permission to do this". Don't ask permission, just do it. If you yield good results, show them. If you don't yield good results, hey, you tried something and learned a little more. Better luck next time. Even that is a better outcome than doing nothing.
---
The program for the conference is all over the map (as well as spread over three floors), so if I didn't get to the talk you were most interested in, my apologies. For the 10:15 AM talk, I thought it would be interesting to check out "An Action Model for Risk" with Jeffrey A. Robinson. This is an excerpt of an 8-hour tutorial, but the question can be distilled to the following... after we train people on all of these quantitative techniques for a year, we then require two years to un-train them from what we've taught them! They follow the procedures to the letter, they can analyze problems, they can analyze risk, but they cannot actually make a decision. The reason? They analyze the risks and then they freak out! Sometimes, too much analysis causes paralysis! At times, we need to not analyze, we need to just do. We very often come to a conclusion based on a little poking and prodding, not a deep and comprehensive analysis. Root cause analysis is helpful, but very often, it's the wrong thing to do at the time the problem occurs. Analysis makes sense later, but it often doesn't make any sense to do it in the heat of an issue. Assigning blame can come later; right now, let's get the problem fixed!
Risk is not the same thing as uncertainty. Sometimes we confuse the two. What does risk really mean? Is it the cost of failure? Is it the loss of time? Is it the loss of money? Is it the loss of prestige or customer "good feeling"? Risk is the cost of acting (or not acting). Uncertainty is a very different animal. It's non-linear, and it's related to probability. If you have a good feeling of probable good or probable bad, uncertainty is low. If you don't know the probability, uncertainty is at its maximum, and that's when analysis is helpful (even necessary!).
Great example: a washer failed at a company, so badly that they couldn't tell what the washer was. The company's mandate was to do a reverse-engineering analysis to determine what the part was. Did that make sense? Ridiculous! Why? Because with a credit card and a trip to ACE Hardware, they bought a range of washer sizes for $20, tried each one until they found the one that fit, and returned the rest. An hour of time, a $0.25 part, and they were up and running. Pretty cool example of saying "chuck the risk and try a solution NOW!"
There are several other examples that are a lot less trivial and a lot more potentially expensive. When we have a genuine risk of expense either way, where making one choice is going to be one kind of expensive, but the alternative is some unknown but likely expensive option, then it is time to do an analysis, with the goal of the analysis being to either reduce the risk or reduce the uncertainty. Guess what? A Six Sigma checklist is not likely to help you reach a good conclusion. If the uncertainty is low and the cost is low, just freakin' DO IT.
As the risk or uncertainty rises, the need for analysis goes up. When your product has the potential of "DEATH", it's OK to analyze. PLEASE, analyze! Even in these environments, there is the possibility of overdoing it. Teams can be so conservative as to prevent any action or innovation whatsoever. Let's face it, many of us are not working on nuclear reactor software or the microcode to drive a pacemaker, so let's stop pretending that we are. That doesn't mean we abandon all thought of risk or uncertainty, but it does mean we need to address the risks realistically, and our analysis should be appropriate, and sometimes that means it's not warranted at all! If your boat is sinking, root cause analysis is not required at this time. Swimming for your life, or finding something that floats, is!
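To capture the gist of that heuristic, here's a quick sketch I put together from my notes. To be clear, this is my own illustration, not anything lifted from Robinson's tutorial; the 0-to-1 ratings and the thresholds are completely made up.

```python
# A minimal sketch of the "act vs. analyze" heuristic as I heard it.
# The 0.0-1.0 scales and cutoffs are hypothetical, chosen purely for illustration.

def recommended_action(cost_of_failure: float, uncertainty: float) -> str:
    """Suggest a course of action given rough ratings on a 0.0-1.0 scale.

    cost_of_failure: how expensive it is if the choice goes wrong (the risk).
    uncertainty:     how little we know about the probability of failure.
    """
    if cost_of_failure < 0.3 and uncertainty < 0.3:
        return "Just do it; analysis would cost more than the mistake."
    if cost_of_failure >= 0.8:
        return "High stakes (think 'DEATH'): analyze thoroughly before acting."
    if uncertainty >= 0.7:
        return "Uncertainty is near its maximum: analysis to reduce it is worthwhile."
    return "Try a cheap experiment first; analyze only if it fails."

# Example: the broken washer -- a cheap part, and little doubt about the fix.
print(recommended_action(cost_of_failure=0.05, uncertainty=0.2))
```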
---
The next talk I decided to check out was from "Professional Passport", or at least that was what the lead-off slide portrayed. Initially, I was skeptical; I thought this was going to be a product pitch or product demo, but that wasn't the case. "Cultural Intelligence: From Buzz Word to Biz Mark", presented by Valerie Berset-Price, spoke to the idea that the world's inter-connectivity has changed the way we communicate and interact with people from different cultures. I know that this is true, and I have experience with this not so much through work as through BBST. Imagining work done by interconnected teams, what will work look like in the future? Will work be regional, or will it be global? Can we develop a global mindset that is not a "my way or the highway" approach? Is it necessary to share norms to be effective?
The Lewis model is a breakdown among three continuums that helps determine how to interact, generally, with a business contact. The Linear-Active (blue), the Multi-Active (brown) and the Reactive (yellow) corners of the triangle roughly place the cultures of a variety of countries. The ways that these countries and their business people interact with each other and with people around the world vary, and they sit at different places on the continuum. The point? It helps to know where people generally lie on the continuum. Are they transactional or relationship oriented? Is their culture egalitarian or do they have a stratified hierarchy? Do they celebrate the individual or the group? Are they Driven or are they Live For Today?
These differences are real, they are cultural, and they matter. If you work with people who are from a different part of the continuum, it's important to realize that they will react differently based on these cultural factors. The U.S.A., political rhetoric aside, is actually extremely egalitarian and individualistic. Many people come from countries where group-oriented approaches are more normal, where the hierarchy is well established, where the idea of a highly educated individual reporting to a person with lesser scholastic achievement is absurd. How do you deal with those dynamics? What does it mean to actually live in these environments? How do we handle things like individual praise vs. group involvement?
What happens when you go to a country and you have to drive on the opposite side of the road (and sit on the opposite side of the car)? It messes with your sense of reality, and when you do it, you turn off the radio, you turn off the phone, you don't talk, you just drive! It shakes up your system, and you must learn how to deal with it, or risk crashing. Other cultures are the same way. The problem is not as dire as when we drive in another country, but our reaction should be the same. We need to turn off our autopilot, we need to stop expecting people to behave the way we do. We need to learn a little "When in Rome".
This doesn't mean shutting off our morals, our compass, or our common sense. It does mean understanding what is important to another person from another culture. Don't think that just because someone speaks fluent English, they get everything you say. I know full well just how little my Japanese gets me beyond "where's the train station, where's the food, where can I buy manga?" (I'm kidding a little; I studied Spanish and German when I was a kid, and I still get some of the gist of what people are saying in those languages, and that alone has made it very real to me how challenging it can be to communicate across cultures).
---
During the lunch hour, Matt Heusser and I gave a talk about Testing Challenges, specifically how to use testing challenges to mentor prospective testers and help see areas in which they could potentially learn and develop. For those who have read my blog in the past, you are somewhat familiar with Miagi-do, which is the grassroots school of software testing that we are both affiliated with, and which Matt founded a few years ago.
This is the first time that we have done this kind of talk, and we peeled back some of the veil of secrecy that surrounds the school. Well, OK, it's not really secret, we just don't advertise it. We walked the participants through a few testing challenges and discussed how those challenges could be used to help mentor other testers.
Matt and I took turns discussing different challenges, ranging from a salt shaker to getting my cell phone to play music. For those who want the answers to these challenges, well, you'll have to ask someone who was there ;). Seriously, though, it was neat to have the opportunity to share some of these challenges in a public setting and encourage others to use our model and approach with themselves or their teams.
---
For the afternoon, I am shifting my focus from speaker to moderator. Currently, we are in the Automation track, and I'm hearing a talk from Shilpa Ranganathan and Julio Lins (of Microsoft) about some of the challenges they face with regard to testing on multiple platforms. Both Shilpa and Julio are from Microsoft's Lync team. Leaving the server aspect of the product out of the equation, there are many possible platforms that the software is meant to run on: Windows client (.NET, Silverlight), web clients, and mobile clients for iPhone and Windows Phone 7 (hmmm, no Android?). It would seem like a no-brainer that making tests that could be reused would be an obvious win.
That makes sense, of course, but what we would like to do vs. what actually happens are often very different things. It was interesting to see the ways in which they structured their testing and design so that they could reuse as much as possible. As their talk makes clear, it's not possible to completely reuse their test code on all platforms, but for as many cases as possible, it makes sense. This gels well with my personal "write once, test anywhere" efforts, and while it's not entirely possible to do this in all circumstances, it certainly makes it easier to do so where you can. When in doubt, smaller test cases are usually more portable than all-encompassing ones. Also, making efforts to stabilize the platforms made more sense than trying to just copy test code between environments.
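To make that idea a little more concrete, here's a rough sketch of what "write once, test anywhere" can look like: shared test logic behind a thin platform adapter. This is purely hypothetical on my part; it is not the Lync team's framework, and all the class and method names are invented.

```python
# Hypothetical sketch: one portable test case, reused across platforms by hiding
# platform differences behind a small adapter interface. Names are invented.

from abc import ABC, abstractmethod


class ClientDriver(ABC):
    """Platform-specific operations each client (desktop, web, mobile) would implement."""

    @abstractmethod
    def sign_in(self, user: str, password: str) -> bool: ...

    @abstractmethod
    def send_message(self, contact: str, text: str) -> None: ...

    @abstractmethod
    def last_received_message(self, contact: str) -> str: ...


def test_round_trip_message(driver: ClientDriver) -> None:
    """A small, portable test case that runs against any driver implementation."""
    assert driver.sign_in("alice", "secret")
    driver.send_message("bob", "hello")
    assert driver.last_received_message("bob") == "hello"


class FakeDriver(ClientDriver):
    """In-memory stand-in, used here only to show the shared test running."""

    def __init__(self):
        self.inbox = {}

    def sign_in(self, user, password):
        return True

    def send_message(self, contact, text):
        self.inbox[contact] = text

    def last_received_message(self, contact):
        return self.inbox[contact]


test_round_trip_message(FakeDriver())
```

The payoff of keeping each test this small is exactly the point from the talk: the less a test assumes about its platform, the more platforms it can run on unchanged.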
---
For my second moderated talk, I am listening to Vijay Upadya, also from Microsoft, discuss the ideas behind dynamic instrumentation, and how it can be used with state-based testing. The idea is that there is a specific state that needs to be established for a test to be valid. Often, we cannot fully control those state transitions, or catch them at the critical moment they happen, because ultimately they happen in the background, and rarely on the timeline that we have set. Dynamic instrumentation uses the idea that, through code and binary manipulation, we can set up these states as we need them to occur. Sounds tricky? Apparently, yes, it is.
Vijay talked about this approach in the context of the project he was working on while Microsoft was developing SkyDrive. By using a library called Detours, they were able to rewrite memory to create specific state conditions as needed, and most importantly, in a repeatable manner. I'll have to admit, part of me is intrigued by this, and part of me is disappointed that this is a Microsoft-specific technology (nothing against Microsoft, but now I'm wondering if there's any way to do this in the Ruby space; I will have to investigate later this evening when I get some free time :) ).
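Since Detours itself works at the binary level in C/C++, the closest analogue of the idea I could sketch quickly is in Python, using the standard unittest.mock library: intercept a dependency so the hard-to-reach state happens on demand, repeatably. The SyncClient class and its methods below are toy stand-ins I made up to show the shape of the technique; this is not Vijay's actual implementation.

```python
# Rough analogue of the dynamic-instrumentation idea: intercept a call so a
# specific state can be forced deterministically for a test. Names are invented.

from unittest import mock


class SyncClient:
    """Toy stand-in for a client that talks to a remote storage service."""

    def check_quota(self) -> str:
        # In a real product this would hit a remote service on its own schedule.
        return "OK"

    def upload_file(self, name: str) -> str:
        if self.check_quota() == "QUOTA_EXCEEDED":
            return "queued_for_retry"
        return "uploaded"


def test_upload_queues_when_quota_exceeded():
    client = SyncClient()
    # Intercept check_quota so the hard-to-reach "quota exceeded" state occurs
    # on demand, rather than waiting for the backend to produce it.
    with mock.patch.object(SyncClient, "check_quota", return_value="QUOTA_EXCEEDED"):
        assert client.upload_file("notes.txt") == "queued_for_retry"
    # Outside the patch, the normal path is back.
    assert client.upload_file("notes.txt") == "uploaded"


test_upload_queues_when_quota_exceeded()
```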
---
With more of my involvement with mobile devices, I found the idea of a more mobile discussion interesting, so Billy Landowsky's "Shake 'n' Send" seemed like a good fit. Think about this... how do we get reasonable feedback from our mobile users? How can we get real and effective feedback as these apps are developed? The screen-size limitations, single-app focus and the very "walled garden" realities of many of these applications make for a challenging environment to develop for, much less test (this was brought home to me this past weekend as we did a Weekend Testing event specific to mobile app testing and the challenges we face).
In desktop applications, there are "Collect Logs" options, or a customer feedback program that occasionally polls the server and sends updates. Over a mobile carrier's network, that kind of communication model is not as desirable (not to mention, for some, impossible, as network connectivity is not always guaranteed).
Sure, there are a lot of disadvantages... but what about the pluses that could be leveraged? The solution developed for Windows Phone 7 devices is called Shake 'n' Send. The accelerometer in the device detects the shaking and uses it as the trigger to send feedback to the MS servers. It can also queue up email so that, when a connection is available, the OS takes care of sending the messages.
By shaking the phone, feedback ranging from "Happy" to "Bug" can be sent, with trace info and diagnostic data included. The biggest benefit to the user: they never have to leave the application. Gotta admit, this sounds like an intriguing idea. I'd be curious to see how this develops over time.
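Just to play with the idea, here's a back-of-the-napkin sketch of the flow as I understood it: detect a shake from accelerometer readings, queue the feedback with diagnostics, and flush the queue only when a connection exists. This is my own guess at the mechanics, not the actual Windows Phone implementation; the threshold and field names are invented.

```python
# Back-of-the-napkin sketch of a shake-to-send feedback flow. All thresholds
# and field names are hypothetical.

import math
from collections import deque

SHAKE_THRESHOLD_G = 2.5   # hypothetical: how hard a "shake" has to be
outbox: deque = deque()   # feedback waits here until the network is available


def is_shake(ax: float, ay: float, az: float) -> bool:
    """Treat any reading well above 1 g as a shake gesture."""
    return math.sqrt(ax * ax + ay * ay + az * az) > SHAKE_THRESHOLD_G


def on_accelerometer_reading(ax, ay, az, rating="Happy", diagnostics=None):
    """Queue feedback without ever leaving the current application."""
    if is_shake(ax, ay, az):
        outbox.append({"rating": rating, "diagnostics": diagnostics or {}})


def flush_outbox(network_available: bool, send) -> None:
    """Send queued feedback only when a connection exists, like an OS mail queue."""
    while network_available and outbox:
        send(outbox.popleft())


# Example: a hard shake queues a "Bug" report; it goes out once we're online.
on_accelerometer_reading(2.1, 1.8, 0.4, rating="Bug", diagnostics={"trace": "..."})
flush_outbox(network_available=True, send=print)
```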
---
The last official track talk of the day is one I have wanted to see for a long time. When I first came to PNSQC in 2010, I was moderating another track, so I couldn't see it. Last year, I was laid up with a broken leg and rehabilitating, and couldn't attend PNSQC at all. Thus, I made sure to keep my schedule clear so that I could actually watch these high school students from Lincoln High School in Portland discuss "Engineering Quality in the Robotic World", and chart their discoveries and growth over the past several years (their club has been active since 2005). Their journey from the First Lego League and its basic level of software development to more advanced robotics development and programming has been fascinating.
Each of the teenagers (Ida Chow, MacLean Freed, Vicki Niu, Ethan Takla and Jeffry Wang, with their sponsor Kingsum Chow) showed how they developed their robots, attempted and overcame a variety of challenges, and ultimately shared the lessons learned from their discoveries and competitions. I had a chance to interview several of the members of this team during the poster paper presentation in 2010, and it's cool to see how they have developed their skills since that interview two years ago. Seriously inspiring, and a bit humbling, too. Props to them for their presentation skills as well, as they have become quite good. Now, to their presentation... One of the cool things to see was how they overcame limitations in design, moving from plastic Lego blocks to machined metal, finer motor controls and a more powerful programming platform (using RobotC).
Hearing this topic is neat in and of itself. Hearing five high school kids talk to this level and with this level of confidence kicks it up to just short of amazing. Seriously, these kids are impressive, and barring some catastrophic derailing of focus or attention, I think they will have a magnificent future in this industry :).
---
We had a great evening reception, and I had many opportunities to discuss my paper's points and my own topic of "Get the Balance Right: Acceptance Test Driven Development, GUI Automation and Exploratory Testing". Each group I discussed it with helped me to see an additional facet I had not directly considered, so much so that I think there's a lot more potential life in these ideas beyond just a paper and a presentation, so stay tuned for more on that front. With that, it's time for dinner, so I'm going to seek out one of Portland's famous food trucks and call it a night. See you all tomorrow!