A first glimpse of Ireland from the air and in the airport.
---
The first talk I am witnessing is courtesy of Andy Stanford-Clark, and it's about "The Internet of Things... it's Coming!" What is the Internet of Things? It's the interconnection of devices and information sources that are not necessarily what we typically think of as Internet-enabled devices. For the past three decades we've been focused on computers, phones, tablets and communication devices. Those are things we are now well used to seeing, but what about the lights in our house? Our refrigerator? Our home thermostat? The train station display? Many of these devices are using homebrew tools (think Arduino, Raspberry Pi, or other boards) to control things or to set up servers that we can query, modify and update. To some, this is the epitome of nerdy, and for others, it's genuine and valuable information that helps us make decisions about what we can do (hmmm, that sounds familiar :) ).

The "Internet of Things" as it stands today is still initial and primitive, more of a fun curiosity for forward-minded nerdy types, but the promise of what it can offer is very compelling. What if we could actually put together a clear understanding of how we use energy, as an example? We know we use water, electricity and gas for various purposes, but do we really know when we are using water? What is really causing the largest percentage of usage? Is it family showers? Laundry? Me changing out the water in the fish tanks? Garden maintenance? How cool would it be if I could get an hour-to-hour breakdown of water usage each day and drill down to see the various times? That is a perfect application of the Internet of Things, if we choose to set it up and look at it.
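To make that a little more concrete, here's a minimal sketch of my own (nothing Andy showed) of what an hour-by-hour water breakdown could look like, assuming a flow sensor that emits one pulse per fixed volume of water; the calibration value and the sample timestamps are made up:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical sketch: aggregate water-meter pulses into hourly buckets.
# Assumes a flow sensor that emits one pulse per fixed volume of water.
LITRES_PER_PULSE = 0.5  # made-up calibration value

usage_by_hour = defaultdict(float)

def record_pulse(timestamp: datetime) -> None:
    """Add one sensor pulse to the bucket for its hour of the day."""
    usage_by_hour[timestamp.strftime("%Y-%m-%d %H:00")] += LITRES_PER_PULSE

def report() -> None:
    """Print the hour-to-hour breakdown so we can see when water is used."""
    for hour, litres in sorted(usage_by_hour.items()):
        print(f"{hour}  {litres:6.1f} L")

if __name__ == "__main__":
    # Simulated pulses standing in for a real sensor feed.
    for ts in ["2014-01-01 07:15", "2014-01-01 07:18", "2014-01-01 19:40"]:
        record_pulse(datetime.strptime(ts, "%Y-%m-%d %H:%M"))
    report()
```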
Shall we get even more crazy? How about making a mousetrap that actually tells you when it has caught a mouse? Sound weird? Well, with an Arduino board and a mechanical mousetrap, it's doable, and Andy and his family did it.
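Andy's build was Arduino-based; purely as an illustration, here is roughly what the same idea might look like on a Raspberry Pi in Python. The wiring, the pin choice and the RPi.GPIO approach are my guesses, not his design:

```python
# Rough sketch of the same idea on a Raspberry Pi instead of an Arduino
# (my own guess at the wiring, not Andy's actual build): a microswitch on the
# trap closes when it snaps, pulling the assumed GPIO pin 17 low.
import RPi.GPIO as GPIO  # only available when running on a Raspberry Pi

TRAP_PIN = 17  # hypothetical pin choice

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRAP_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

try:
    print("Waiting for the trap to spring...")
    GPIO.wait_for_edge(TRAP_PIN, GPIO.FALLING)
    # A real build would send an email, a tweet, or an MQTT message here.
    print("Mouse caught! Time to empty the trap.")
finally:
    GPIO.cleanup()
```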
All interesting insights and novel uses, but how will this engage and interest the average everyday user, and when will we see the move away from the "nerdy" crowd towards everyday people using these things? More to the point, how will we actually be able to test this stuff? Embedded systems knowledge will certainly help, but in many ways, Arduino and Raspberry Pi give those who want to work in these areas some of the best up-front training imaginable. Many of the systems use simple languages like Scratch or JavaScript (well, simple by comparison ;) ), and there will certainly need to be a change of focus. Awareness of and familiarity with working with circuit boards helps, but the interfaces are not *that* foreign, thank goodness :). Some interesting issues still need to be considered, such as how to power all of these devices, and how to limit the power they draw (the goal is to make these options available without adding a large additional power load; many of these devices are being positioned as solutions for helping people save power, so that's a creative challenge to consider). Additionally, there's the question of how to simulate thousands of devices running. How will we do that? How will we test that? These are questions that the next few years will probably start to answer for us. No matter how you look at it, this will not be boring ;).
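On the "thousands of devices" question, one naive starting point (my own sketch, not anything from the talk) is to stand up lots of lightweight fake devices in threads and have them generate readings concurrently. Here they just write to an in-memory list so the script is self-contained:

```python
import random
import threading
import time

# Naive sketch: simulate many lightweight fake devices, each producing a few
# readings. In a real test the readings would be sent to the system under test;
# here they go to a shared, thread-safe list so the script is self-contained.
readings = []
readings_lock = threading.Lock()

def fake_device(device_id: int, samples: int = 3) -> None:
    for _ in range(samples):
        reading = {"device": device_id, "value": random.uniform(0.0, 100.0)}
        with readings_lock:
            readings.append(reading)
        time.sleep(random.uniform(0.01, 0.05))  # jitter between readings

threads = [threading.Thread(target=fake_device, args=(i,)) for i in range(1000)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"Collected {len(readings)} readings from {len(threads)} simulated devices")
```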
---
Next up, "Adapting Automation to the Available Workforce" with Colm Harrington. This is a topic that has long interested me and has likewise vexed me in a variety of workplaces. Colm started with an anecdote about Einstein and an examination he was giving his students. The questions were the same as the previous year's, and when called on it, he replied, "the questions are the same, but the answers have changed". Automation has changed in the last several years as well. The commercial tools have lost a lot of ground; WebDriver is currently king. The need for automation is expanding, and testers who do exclusively manual testing are becoming more and more rare. All of us are doing some level of automation, but not all of us are seasoned programmers and software developers. We need to do a better job of enabling more people in the organization to use and modify automation so it's useful to more of the organization. It's a great promise and a wonderful goal, but how do we bring those to the table who do not already have a strong automation background? More to the point, can we get the software out on time, at or under budget, without embarrassing errors getting into the hands of our customers? Automation is secondary to that, but still very important.
Colm's goal is not to have people get too deep into the code, which makes sense with the topic. The goal is not to force testers to write automation, but to encourage them to get involved in a meaningful way and at a level they can be comfortable with. Automation can cover a lot of ground, but for me, the biggest win is handling the tedious stuff: setup, population and traversal. Automation that addresses that area alone makes me very happy. Yes, it takes time to set a lot of this up, but at least we gain the ability to set everything up from start to finish so we can get more deeply into the corner areas. When we have to set up everything manually, by the time we get to the places that are interesting, we end up exhausted, and less likely to find interesting things. To that end, automation can be written in a way that doesn't require an in-depth knowledge of all the internals. Instead, we can focus on the traversal steps we know we do all the time.
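As an illustration of the kind of setup-and-traversal helper I mean, here's a sketch using Selenium WebDriver's Python bindings; the URL, element names and credentials are all hypothetical, not from any real app:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Sketch of "automate the tedious setup and traversal". Everything specific
# here (URL, element names, credentials) is made up for illustration.
def logged_in_driver(username: str, password: str) -> webdriver.Firefox:
    """Spin up a browser, log in, and land in the app so a human can start
    exploring from an interesting state instead of clicking through."""
    driver = webdriver.Firefox()
    driver.get("https://app.example.com/login")
    driver.find_element(By.NAME, "username").send_keys(username)
    driver.find_element(By.NAME, "password").send_keys(password)
    driver.find_element(By.ID, "login-button").click()
    return driver

def populate_sample_projects(driver, count: int = 5) -> None:
    """Create some throwaway data so the corner cases have something to chew on."""
    for i in range(count):
        driver.get("https://app.example.com/projects/new")
        driver.find_element(By.NAME, "project-name").send_keys(f"sample-project-{i}")
        driver.find_element(By.ID, "save").click()

if __name__ == "__main__":
    driver = logged_in_driver("tester", "not-a-real-password")
    populate_sample_projects(driver)
    # From here, explore the interesting corners by hand, with fresh eyes.
```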
One of the biggest challenges organizations have is that they do not have the ability or the time to take a full team and train them from scratch. However, if the team has taken the time to implement a framework that is easily modified, or that allows individuals on the team to get some quick wins, that will definitely help speed up their success in getting involved with automation. Using a Domain Specific Language or API, many of the steps can be compartmentalized so that the whole team can communicate in the same language. Will the toolsmiths have an advantage? Of course they will, but they will also be able to make a system that all of the participants can leverage (think of Cucumber and the ability to write statements that are well understood by everyone). When the testers write the tests and cover the various test cases, the testers' knowledge is being used most effectively, with the programmers able to fill in the blanks so the testers can focus on test design and implementation rather than trying to wrestle the tool into working for them.
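A toy example of what that shared layer can look like (my illustration, not Colm's actual framework): the toolsmiths implement the verbs once, and testers compose them into readable scenarios. Here the "application" is an in-memory stand-in so the example runs anywhere:

```python
# Sketch of a tiny domain-specific layer. The toolsmiths write the verbs once;
# testers compose them into tests that read like plain statements.

class ShopApp:
    """Stand-in for the real system under test."""
    def __init__(self):
        self.user = None
        self.cart = []

    def login(self, username):
        self.user = username

    def add_to_cart(self, item, price):
        self.cart.append((item, price))

    def cart_total(self):
        return sum(price for _, price in self.cart)

# --- the shared verbs ---
app = ShopApp()

def given_a_logged_in_user(name):
    app.login(name)

def when_they_add_to_the_cart(item, price):
    app.add_to_cart(item, price)

def then_the_cart_total_should_be(expected):
    assert app.cart_total() == expected, f"expected {expected}, got {app.cart_total()}"

# --- a scenario a non-programmer can read (and, with help, write) ---
given_a_logged_in_user("colm")
when_they_add_to_the_cart("keyboard", 40)
when_they_add_to_the_cart("mouse", 10)
then_the_cart_total_should_be(50)
print("cart total scenario passed")
```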
Some of the best ways to help testers and others be effective are to keep things simple, consistent and intuitive. Keep data and test scenarios as separate from each other as possible, do the best you can to encourage a common language between the code and the test implementations (use labels and methods that make sense based on how they are named and what they actually do), and keep tests as atomic as possible (one test from beginning to end, in as few steps as necessary to accomplish the goal). Additionally, a key consideration is to favor tests and methods that are humane over ones that are merely minimal. Refactoring to the point where the intention is obfuscated is much less helpful than allowing a little more verbosity to give all the participants a clear understanding of what's happening. Also, use the option to create soft assertions, which allow the user to check 50 different fields, notice the one place it fails, and report at the end of the test rather than stopping cold at the first error discovered with a hard assert.
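Several test libraries offer soft assertions out of the box; here is a minimal homemade sketch in Python just to show the shape of the idea (not any particular library's API):

```python
# Minimal homemade soft-assertion helper: collect every failure and report
# them all at the end, instead of stopping cold at the first mismatch.

class SoftAssert:
    def __init__(self):
        self.failures = []

    def check(self, actual, expected, label):
        """Record a mismatch instead of raising immediately."""
        if actual != expected:
            self.failures.append(f"{label}: expected {expected!r}, got {actual!r}")

    def assert_all(self):
        """Raise once, listing every field that failed."""
        if self.failures:
            raise AssertionError("Soft assertion failures:\n" + "\n".join(self.failures))

# Example: verify many fields of a record in one pass.
record = {"name": "Test User", "country": "IE", "plan": "trial"}
soft = SoftAssert()
soft.check(record["name"], "Test User", "name")
soft.check(record["country"], "IE", "country")
soft.check(record["plan"], "paid", "plan")  # this one fails

try:
    soft.assert_all()  # reports every failure at the end, not just the first
except AssertionError as failures:
    print(failures)
```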
Other important considerations: don't make something reusable if all you are doing is adding to its "reuselessness". Let the client code shape the API, and don't let the API be set in stone. Devs and QA need to work together, whether through proximity or communication. If you can sit together, sit together; if you can't, share screens and talk together at the same time, even if not in the same place.
---
Next up is Rikard Edgren and "Trying to Teach Testing Skills & Judgment". This particular topic is near and dear to my heart for a variety of reasons, the most important being that my daughter has started learning how to write code with me. More than teach her how to code, though, I want to teach her how to test, and more to the point, teach her how to test effectively and with the ability to learn what is important, not just do the busywork associated with general testing. The model Rikard describes is a one-to-two-year education arrangement, with internships and other opportunities to actually get into real situations. Rikard's approach and philosophy is as follows:
- Motivation is key
- Not for the money
- It's not about us
- Encourage new ideas
- Don't be afraid
Rikard mentions the value of focusing on Tacit Skills and Judgment, including asking good questions, applying critical thinking, understanding what is important to the customers, learning quickly, looking at a variety of perspectives, utilizing effective test strategies, looking for those opportunities to "catch lightning in a bottle" from time to time (i.e. serendipity) and, of course, knowing when good enough actually is good enough ;).
Rikard shared a story about working with a programmer who decided to try being a tester. While Rikard could see problems left and right, the programmer didn't necessarily see the issues, or made assumptions based on how the code was meant to be used. The key was that the tester wanted to see the problems. Programmers often want to get the work finished and off their plates (I totally understand this, and believe me, I make the same excuses when I am the one writing the code). This is also why I find it imperative that someone else tests the code I write, and is not afraid to tell me my creation is ugly (or at the very least, could be substantially improved ;) ).
Critical thinking isn't just questioning everything; we need to be discerning in the things we question. Start with "What if..." to get the ball rolling. This will help the tester start thinking creatively and get into unique areas they might not automatically consider. Also, be aware of the biases that enter your purview (and don't ever say you don't have biases, everyone does ;) ).
Everyone thinks differently, and the ability as a teacher to explain things in a variety of ways is critical. Likewise, we want to encourage those we are teaching to try a variety of things, even if the attempts are not successful or lead to frustration. We need to step back, regroup and give them a chance to look at what they did well, where they could improve, and how they can get the most out of the experience. There will be theory and hard topics, and those are important, but always couch the concepts in practical uses. The names are not what's essential; the use and the understanding of how things work is (well, the names are seriously helpful for making sure we do what we need to and can communicate effectively, but focus on what is being done more so than what it is called, at least at first. Once they get what is happening, the names will make sense ;) ).
Rikard has a paper that covers this topic. I'll reference it as soon as I get time to get to the link and update this stream of consciousness. Oh, and lookie lookie... green and yellow cards to manage question flow... now where have I seen that before ;)?
---
Next up, the closing keynote for Tuesday with Rob Lambert about "Continuous Delivery and DevOps: Moving from Staged To Pervasive Testing". I've often heard of this mystical world of DevOps; I've even heard of Continuous Deployment and heard rumors of people doing it. We do pretty well where we are, but we don't currently have a full-scale Continuous Delivery system in place. Still, there is a sense of wonder and appreciation whenever I hear about this in practice.
Rob spent the first part of the talk discussing what many of us know all too well: the long slog of staged development, testing and release. Don't get me wrong, I am not a fan of this approach at all (too many years suffering through it), especially because, at the tail end, they would pull in everyone humanly possible to test a release (and, ultimately, that becomes a condemnation of software testing as ineffective, slow and boring). Yet ironically, the next project gets run exactly the same way.
This brought the big question to the fore: "why do we keep doing these massive, slow-running releases?" When customer needs change, we need to change with them, and big, cumbersome releases don't allow for that. Major releases also require a lot of testing of a lot of code at one time, and that invariably means the testing is slow, cumbersome, and most likely not full coverage. Releasing in smaller and more frequent chunks means that less code has to go out, less overall thrashing takes place, and the feedback loop is much tighter.
How to do that? Rob's Company chose to do the following:
- Adopt Agile
- Prioritize work
- Bring Dev and Ops together (DevOps, get it ;)?)
- Everyone tests. Testing all the time.
- The team needs to become one with the data, and understand what the servers are telling us about our apps and services
They removed testers from the center of the team. Note that they didn't remove testing from the center; in fact, that's the very switch they made. Testing always happens, and the programmers get into it as well. This goes beyond Test Driven Development. It means that automation is used for verification where possible, along with a more aggressive approach to canning as many tests as possible and a progressive march to get more coverage and more tests in place with each story. This is very similar to what we do on my team. The automated tests are the things we want to run every time we do a build, so we emphasize getting those tests reliable, understandable, and easy to modify if need be. Ideally, we automate as much of the drudgery as we can so that we have fresh eyes and fresh energy to look at actual new features and learn what those new features actually do.
Cycle times vary, and each organization can modify and tweak its cycle times as it chooses. If weekly is the shipping schedule you want to use, then your cycle time needs to be somewhere between four and five days. Dogfooding (or pre-production) is a life that we understand very well. It helps us see the real performance, the actual workflows and how they are processed, and the good, bad and ugly that surrounds them. Rob emphasizes that exploratory testing be used alongside the focus on automation, with an emphasis on the testing that is most critical. The key to success is a focus on continuous improvement and questioning the effectiveness of what you are doing. There will be political battles, and often there will be issues with people rather than issues with technology or process. Additionally, everyone knows how to test, but not everyone knows how to test with relevance. Remove the drudgery where you can, so that testers have open eyes and fresh energy to tackle real and important problems. If "anyone can test", then examine the tests that anyone can do, and ask critically whether that testing is providing value.
---
More to come, stay tuned :)!!!