Good morning, everyone. STP-CON is once again underway. Rich is onstage now getting today's program in order. There's a full and action-packed day in store, plus something interesting later today with Matt Heusser called Werewolf. It's limited to twenty people, and I'm going to help support the event rather than specifically participate. For those at the conference who want to play Werewolf, come on out at 5:30 p.m.
I want to give a shout-out to the folks at the Sheraton, who have been fantastic at dealing with logistics, food, drink breaks, etc. Seriously, they have been fantastic. Also, I want to thank everyone who has hung around for the amazing conversations that go on each night afterwards. The plus side is that I learn so many interesting things from various perspectives. The downside is that I am going to bed way too late this week ;).
We're about to get ready for the second keynote, so I'll be back shortly.
Ever had a conversation with someone who says "I don't care where we go to eat" but then, when you actually make a suggestion, always says "no"? If that annoys you, then the keynote speaker might strike a chord with you. In "Get What You Want With What You’ve Got", humorist Christine Cashen is in the middle of describing those people who always complain about the world around them and how nothing works for them. How will they deal with situations that require doing more with less?
Christine is describing people's personalities based on single words: the who people, the what people, the why people, and the how people. If you want to get what you want, you have to realize that each of these people has their own language and their own way of dealing with things, and you have to know what they need to hear and what will work for them.
One of the things I will say already about Christine is that she engages directly with individuals in the audience and draws them in. I had to laugh when she did the partner exercise with the fist, and my immediate reaction was "oh, this is so Wood Badge!" Regardless, it was fun to see the various reactions of the people in the audience.
One of the great tools I use often, and have found greatly helpful, comes from a phrase James Bach used in an interview a couple of years back... "I have strong convictions, but they are lightly held." What does he mean by that? He genuinely believes or has come to certain conclusions, and he will battle and fight for them, but if new information comes to light, he can modify his understanding and see things differently. That's an extremely valuable tool.
With humor, a bit of silliness, and a lot of heart, this was honestly a way better talk than I was expecting. By the way, for those who want a taste of what Christine is like, check this out:
-----
I have wanted to participate in Henrik Andersson's talk "Now, what's Your Plan?" several times, but I have either been speaking or otherwise engaged each time he's given it. When I saw he was doing it as a two hour block, I knew I had to get in on it. Thus, much of today will be focused on Henrik's presentation (and thanks to Leonidas Hepas for helping facilitate the session). I think this will be fun :).
First thing we all did was write down a working definition of "context". Simple, right?
Hmmm... maybe not ;). Context is deceptively easy to define, but it's not as easy to come to an agreement on what it actually means. Henrik, of course, was not content to just have people consider a definition; we needed to internalize and understand it. When he pulled out the spider robots, I smiled and laughed... and then told my group that I would need to recuse myself from the exercise, since it forms the content of the "What is Context?" module being used in the SummerQAmp curriculum. Still, it's cool to see how the groups are addressing the various challenges.
Without spoiling the exercise (some of you may want to do it later if Henrik offers this again, and I recommend you go through it if you get the chance), it's interesting to see how many different scenarios and contexts can be created for what is essentially the same device.
As each team goes through each round, changes to the requirements and the mission are introduced. Each change requires rethinking and re-evaluating what is needed and what is appropriate. This is where "context" begins to be internalized, along with the ability to pivot and change our testing approach based on new information. It's stressful, it's maddening, and it really shows that context is not only a consideration across different projects; there can be different contexts within the project you are actually working on, and the ability to change one's mind, ideas, and goals mid-stream is a valuable skill to have.
What was interesting was to come back and see, based on this experience, whether the teams' ideas of context had changed. We can look at context in terms of the way we test, the use of the product, or the people who will use it. Several of the teams came back to their initial definitions and decided to modify them. I could be a smart aleck right now and say that this is the moment everyone comes out and says "It depends" ;).
So... what did our instructors/facilitators use to define context? See for yourself:
The interesting thing is that all of the definitions differ, but none of them contradict each other. This is why "best practices" are not helpful: the context changes, not just between projects but often within a project. Hearing the different teams discuss their own experiences and how they matched up with the exercise, it's clear that context is a genuine struggle, even for groups that profess to understand it. There is tremendous variety in markets, needs, timelines, and issues. Learning how to understand those elements, and how to address them as they come up, will go a long way toward driving what you test, how you test, when you test, and how you prioritize what you test.
Again, a deceptively simple, yet complex issue, and a seriously fun session. Thanks to Henrik and Leo for their time and energy to make it possible.
-----
Lunch was good, and we are now into our afternoon keynote. Matt Johnston from uTest is talking about "The New Lifecycle for Successful Mobile Apps". We talk a lot about tools, processes, and other details about work and what we do. Matt started the talk by discussing companies vs. users. Back in the day, companies provided product to users. Today, because of the open and wide availability of applications in the app stores, users drive the conversation more than ever. A key thing to realize is that testing is a means to an end; it's there to "provide information to our stakeholders so that they can make good decisions" (drink!).
Mobile is just getting started. We are seeing a transition away from desktops and laptops to mobile in all its forms (tablets, phones, etc.). Mobile devices are poised to eclipse desktop and laptop machines in sheer numbers in the next three to five years. Mobile apps are also judged much more harshly than their desktop or web equivalents were at the same point in the product lifecycle. The court of public opinion is what really matters: app store ratings and social media will make or break an app, and they will do so in record time today.
Much of the testing approach we have used over the years has come from an outside-in perspective. Mobile is requiring that our testing priorities invert and that we focus on an inside-out approach. What the user sees and feels trumps what the product actually does, fair or not.
The tools available to mobile developers and testers are expanding, and the former paucity of options is being addressed. More and more opportunities are available to check and automate mobile apps. Analytics is growing to show us what the users of mobile devices are actually doing, and how and where they are voting with their feet (or their finger swipes, in this case ;) ).
A case study presented was USA Today, a company that supports a printed paper, a website, and 14 native mobile apps. While it's a very interesting model and a great benefit to its users, it's a serious challenge to test. They can honestly say that they have more uniques and more pageviews on mobile than on the web. That means their mobile testing strategy really matters, and they have to test not just domestically but worldwide. The ability to adapt their initiatives and efforts is critical. Even with all this, they have regularly earned a 4.5-star app store rating across their apps.
If your head is spinning from some of that, you are not alone. Mobile isn't just a nice-to-have for many companies; it's now an essential component of their primary revenue streams.
-----
One of the unfortunate things that can happen with conferences is a presenter having to drop out at the last minute. It happened to me for PNSQC 2011 because of my broken leg, and it happened to one of the presenters scheduled today. In his place, Mark Tomlinson stepped in to discuss performance measurements and metrics. The first thing he demonstrated was that we can measure a lot of stuff and chew through a lot of data, but understanding what that data actually represents, and where it fits in with other values, is the real art form and where we really want to focus our efforts.
Part of the challenge we face when we measure performance is "what do we actually think we are measuring?" When a CPU is "pegged", i.e. showing 100% utilization, can we say for sure what that represents? In previous decades, we were more sure about what that 100% meant. Today, we're not so sure. Part of the challenge is getting clear on the question "What is a processor?" We don't really deal with a single CPU any longer; we have multiple cores, and each core can host child virtualization instances. Where does one CPU's reality end and another's begin? See, not so easy, but not impossible to get a handle on.
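To make that ambiguity concrete, here's a minimal sketch of my own (assuming the third-party psutil library is installed; this is my addition, not something Mark showed) that reports physical vs. logical CPUs and per-core utilization. A single busy core on an eight-core box averages out to roughly 12.5%, which is exactly why "the CPU is pegged" deserves a follow-up question:

```python
# Minimal sketch (assumes the third-party psutil package is installed):
# even "CPU utilization" needs qualification -- per logical core, per
# physical core, or averaged across everything?
import psutil

print("Physical cores:", psutil.cpu_count(logical=False))
print("Logical CPUs:  ", psutil.cpu_count(logical=True))

# Sample utilization per logical CPU over one second. One busy core on an
# eight-core box shows up as ~12.5% in the averaged number.
per_cpu = psutil.cpu_percent(interval=1, percpu=True)
for i, pct in enumerate(per_cpu):
    print(f"cpu{i}: {pct:5.1f}%")
print(f"average: {sum(per_cpu) / len(per_cpu):5.1f}%")
```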
Disk is another beloved source of performance metric data. Parking the data that you need in the place you need it, in the optimal alignment, is a big deal for certain apps. The speed of access and the feel of the system's response when presenting data can be heavily influenced by how the bits are placed in the parking lot. Breaking up the data to find a spot can be tremendously expensive (this is why defragmenting drives regularly can provide such a noticeable performance boost). Different types of servers handle I/O in different ways (app, DB, caching, etc.).
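As a rough illustration (again using psutil as my own stand-in, not anything from the talk), you can watch disk I/O counters over a short window and compare the time spent reading and writing to the number of operations:

```python
# Minimal sketch (assuming psutil): sample disk I/O counters twice and
# report the delta. Read/write time relative to read/write count gives a
# rough feel for how expensive it is to go hunting for bits.
import time
import psutil

before = psutil.disk_io_counters()
time.sleep(5)  # observation window
after = psutil.disk_io_counters()

reads = after.read_count - before.read_count
writes = after.write_count - before.write_count
print(f"reads:  {reads} ({after.read_bytes - before.read_bytes} bytes)")
print(f"writes: {writes} ({after.write_bytes - before.write_bytes} bytes)")
print(f"time spent reading/writing: "
      f"{after.read_time - before.read_time} / {after.write_time - before.write_time} ms")
```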
RAM (memory) is another much-treasured and frequently coveted performance metric. Sometimes it gets little thought, but when you find yourself using a lot of it, running out can really mess up your performance. Like disk, if you reach 100% on RAM, that's it (well, there's the page file, but really, you don't want to count that as any real benefit; this is called a swapping condition, and yeah, it sucks).
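Here's a small sketch of spotting a likely swapping condition, once more with psutil and with thresholds I picked arbitrarily rather than anything from the session:

```python
# Minimal sketch (assuming psutil): flag when a box is close to a swapping
# condition, i.e. RAM is nearly exhausted and the page file is taking
# real traffic.
import psutil

vm = psutil.virtual_memory()
sw = psutil.swap_memory()

print(f"RAM used:  {vm.percent:5.1f}%")
print(f"Swap used: {sw.percent:5.1f}%")

# Thresholds are illustrative, not gospel -- tune for your own context.
if vm.percent > 90 and sw.used > 0:
    print("Warning: likely swapping; expect performance to fall off a cliff.")
```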
The area where I remember doing the most significant metric gathering is the network sphere. Networking is probably the most variable performance aspect, because now we're not just dealing with items inside one machine. What another machine on the network does can greatly affect my own machine's network performance. Being able to monitor and keep track of what is happening on the network, including retransmission, loss, throttling, etc., can be very important.
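For the coarse end of network monitoring, a counter-delta sketch like the following works (psutil again, my choice; actual TCP retransmission counts would come from OS tools such as netstat -s, which psutil does not expose):

```python
# Minimal sketch (assuming psutil): sample interface counters twice and
# report the delta -- bytes, errors, and drops over the window.
import time
import psutil

before = psutil.net_io_counters()
time.sleep(5)  # observation window
after = psutil.net_io_counters()

print("bytes sent:", after.bytes_sent - before.bytes_sent)
print("bytes recv:", after.bytes_recv - before.bytes_recv)
print("errors in/out:", after.errin - before.errin, "/", after.errout - before.errout)
print("drops  in/out:", after.dropin - before.dropin, "/", after.dropout - before.dropout)
```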
Some newer metrics we are getting more interested in are listed below (a small sketch of sampling the first one follows the list):
- Battery Power (for mobile)
- Watt-hours (efficiency of power consumption in a data center, i.e. "green power")
- Cooling in a data center
- Cloud metrics (cost per hour of spun-up compute units)
- Cloud storage bytes (Dropbox, Cloud Drive, etc.)
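As a toy example of the first item, here's a sketch that samples battery percentage over a minute via psutil's sensors_battery(). That call works on laptops, not phones, so real mobile battery profiling needs on-device tooling; treat this purely as an illustration of the sampling idea, not anything presented in the talk:

```python
# Minimal sketch: estimate battery drain over a test run. The 60-second
# window is arbitrary; a laptop battery is only a stand-in for mobile.
import time
import psutil

start = psutil.sensors_battery()
if start is None or start.power_plugged:
    print("No battery reading, or running on AC power; nothing useful to measure.")
else:
    time.sleep(60)
    end = psutil.sensors_battery()
    drained = start.percent - end.percent
    print(f"Drained {drained:.1f}% of battery in one minute under this workload.")
```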
Other measures that are being seen as interesting for a performance evaluation of systems are below (a small sketch of crunching a couple of these follows the list):
- Time (end user response time, service response time, transaction response time)
- Usage/Load (number of connections, number of active threads, number of users, etc.)
- Multi-threading (number of threads, maximum threads, thread state, roles, time to get threads)
- Queuing (logic, number of requests, processing time)
- Asynchronous Transfer (disparate start/end, total events, latency)
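To show what I mean by turning these into numbers, here's a small sketch of mine (with made-up sample data) that computes response time percentiles and a rough throughput figure from (timestamp, duration) pairs:

```python
# Minimal sketch: turn raw (timestamp, duration) samples from a load test
# into response time percentiles and a rough throughput figure.
# The sample data here is invented for illustration.
samples = [
    # (end timestamp in seconds, response time in seconds)
    (0.4, 0.21), (0.9, 0.35), (1.5, 0.30), (2.1, 0.48),
    (2.8, 0.52), (3.3, 0.95), (3.9, 0.44), (4.6, 1.30),
]

durations = sorted(d for _, d in samples)

def percentile(sorted_values, pct):
    """Nearest-rank percentile; good enough for a quick read."""
    index = max(0, int(round(pct / 100 * len(sorted_values))) - 1)
    return sorted_values[index]

elapsed = samples[-1][0] - samples[0][0]
print(f"median: {percentile(durations, 50):.2f}s")
print(f"p95:    {percentile(durations, 95):.2f}s")
print(f"throughput: {len(samples) / elapsed:.1f} requests/sec")
```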
At some point with all of this, you end up with graphical representations of what you are seeing, and the most bandied-about graph is the "knee in the curve". Everyone wants to know where the knee happens. Regardless of the value, the knee in the curve is the one that people care about (second most important is where things go completely haywire and we max out, but the knee is the really interesting value... by some definition of interesting ;) ).
Correlative graphing is also used to help us see what is going on with one or more measurements. A knee in the curve may be interesting, but wouldn't it be more interesting to see what other values might be contributing to it?
This fits into the first talk that Mark gave yesterday, and here's where the value of that talk becomes very apparent. Much of the data we collect doesn't really tell us much if we just look at the values by themselves. Combining values and measuring them together gives us a clearer story of what is happening. Numbers are cool, but again, testers need to provide information that can drive decisions (drink!). Putting our graphs together in a meaningful way will greatly help with that process.
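A correlative graph can be as simple as two y-axes on one plot. The sketch below (using matplotlib, with numbers I invented) overlays throughput and response time against concurrent users so the knee and its likely cause show up in one picture:

```python
# Minimal sketch (assuming matplotlib is available): overlay throughput and
# response time for the same load levels. The data is invented.
import matplotlib.pyplot as plt

users =      [10,  20,  40,  60,  80, 100, 120]
throughput = [95, 190, 370, 520, 560, 565, 540]   # requests/sec
resp_ms =    [40,  42,  48,  70, 140, 310, 650]   # avg response time (ms)

fig, ax1 = plt.subplots()
ax1.plot(users, throughput, marker="o", color="tab:blue")
ax1.set_xlabel("concurrent users")
ax1.set_ylabel("throughput (req/s)", color="tab:blue")

# Second y-axis: response time climbing as throughput flattens is the
# classic correlated view of the knee in the curve.
ax2 = ax1.twinx()
ax2.plot(users, resp_ms, marker="s", color="tab:red")
ax2.set_ylabel("response time (ms)", color="tab:red")

fig.tight_layout()
plt.savefig("knee_in_the_curve.png")
```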
-----
What does it mean to push, or "jam" a story upstream? Jerry Welch is introducing an idea that, frankly, I never knew had a word before. His talk is dedicated to "anadromous testing", or more colloquially "upstream testing". How can we migrate testing upstream? We talk about the idea that "we should get into testing earlier". Nice to say, but how do you actually do that?!
Let's think about the SDLC. We're all familiar with that, right? Sure. Have you heard of the STLC, the Software Testing Life Cycle? The idea is that just as there is a software development lifecycle, there is also a test lifecycle that works similarly to, and in many ways synchronously with, the SDLC.
One of the key ways a test team can move upstream is to make sure the team understands what they are responsible for and what they deliver. Put an emphasis on training, and plan to have your development team interact with your test team in a fun way (make play dates for developers and testers; sounds weird, but I like the spirit behind the idea a lot :) ).
Testers have to make the commitment to transition over to being an extension of the development process. Testers need to learn more about what is necessary to get a product covered. If a company hires full-stack developers, then likewise, full-stack testers might be a valuable goal. That doesn't mean the testers need to become full-stack developers themselves, but they need to understand all of the moving parts that actually come into play. Without that knowledge, testing is much less effective. It's not going to happen overnight, but getting that skill level up will help testers get farther upstream with the testing process.
Along with learning what to test, make a commitment to focus on estimating what you can really get done in a given sprint or cycle. Stop saying that you cannot get your testing done in a given sprint; instead, get a real handle on what you can accomplish in one.
Every organization's STLC is going to be a bit different. It will be based on the talent the team has and on the talent they can develop. Just like a salmon swimming upstream, your team has to develop strength. More to the point, they need to be able to show what they know. Effective Bragging is a point that is emphasized: if you have a wiki, use it to show what you have learned, what milestones you have met, etc. Another aspect is Battle Elegance, which addresses such areas as people vs. projects, customers vs. team members, and developing goals to keep the testing team focused and moving forward (or swimming upstream).
I'm not sure I'm totally on board with this idea, but I admire its goals, and I think it's one of the few articulated ways I've seen to actually get people thinking about this process. We all want to move upstream, or more to the point, we wish we were involved earlier. The metaphor of "swimming upstream" works for me. It's muscular, demanding, and exhausting, but you will get stronger if you keep swimming. Of course, I'm not so fond of where the metaphor ends. Think about what happens to salmon when they finally get to the spawning grounds: they reproduce, and then they become bird food. I like the reproduce idea. The bird food idea, not so much ;). I guess our real challenge is finding out how we can sync up so that we don't die at the end of the journey.
-----
The last "official" activity for Wednesday (and I use that term loosely ;) ) is a group of testers getting together to play a game called "Werewolf". Think of this as "networking with a twist. The participants are all at a large perimeter of tables, and the goal is to determine who are villagers and who is the werewolf. The point of this game is to observe others around the table, and based on conversations, clues and details, see how quickly the werewolf can be identified, and also not make false identifications. this has been fun in the sense that everyone is both laughing and trying to see i they can get it right the first time without false identification. After the round of sessions today, a little levity is going a long way.