This is going to be a Live Blog, so please forgive the fast typing and likely typos and bad grammar... I'll clean this up and make it pretty later :). Slides to tonight's presentation are at http://bit.ly/test-automation
One of the adjustments I'm currently making is that the time and ability to reach Meetups, and where and when I go, are in transition. One of the reasons is that I'm no longer working in SOMA, which is where a large percentage of the Meetups are held (there or the lower Financial District at times). Needless to say, now that I'm in Palo Alto, it makes more sense to see what's more "local" to Palo Alto in the evenings. Thus, it was with delight that I discovered that there was a San Jose based Selenium Meetup group. Even more to the point, Santiago Suarez Ordoñez is the main speaker, talking about "Beyond Selenium" and what we as testers should have in our toolkits other than just Selenium. Tempting? Absolutely. That's why I'm here :).
Having to relearn the roads and byways to get to destinations in San Jose is always fun, and fortunately, I made it in time. It was fun to re-connect with some familiar faces and hear some of the scuttlebutt about the upcoming Selenium Conference in Boston (and to see if I could offer my services in some fashion to get me there… Ashley, seriously, I'm happy to do recordings/podcasts, let's talk).
The talk tonight is "Automated Testing: It's not all about Selenium". For those not familiar with Santi, he's a regular presenter at both the SF and SJ Selenium meetups, he's with Sauce Labs, and he has been around the block a few times when it comes to both testing and working with Selenium and small open source projects. Oh, and he's also Employee #1 at Sauce Labs (honestly, I never knew that).
Automation or not?
Some questions need to be considered first... what is the value of quality? We want to make sure that the software will meet our customers' expectations, and testing allows us to reach some level of standard (by some definition). Test Automation comes into play when we want to do a number of steps repetitively or have a need to do them faster. Computers are lousy at making judgment calls, but they will be more than happy to do a step 10,000 times, where a sentient human will balk at the repetition; we, on the other hand, are very good at judgment calls. Put the two together and lots of cool things can be done.
Types of Automation
Test automation happens at a lot of different levels. They are not all created equal, and they do not all use the same methods and approaches. Unit Testing and Front End Interaction are as different as night and day. The problem is, much of the automated testing ideas and ideals often come from the unit testing focus, and we need to think very differently the farther away from atomic units we get. Database and service tests are likewise different, and the tests necessary to integrate them all and look at the system holistically mean we need to use all of these skills together in our automation efforts. Scared yet?
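To make that contrast concrete for myself, here's a minimal sketch (my own, not from the slides) of a unit-level test next to a Selenium front-end test. The format_price() function, the URL and the element IDs are all hypothetical stand-ins.

```python
# A minimal sketch of the unit vs. front-end contrast, not from the talk.
# The function, URL and element IDs below are hypothetical.
import unittest
from selenium import webdriver
from selenium.webdriver.common.by import By


def format_price(cents):
    """Hypothetical unit under test: format an integer number of cents."""
    return "${:,.2f}".format(cents / 100.0)


class UnitLevelTest(unittest.TestCase):
    # Atomic, fast, no browser, no network.
    def test_format_price(self):
        self.assertEqual(format_price(123456), "$1,234.56")


class FrontEndTest(unittest.TestCase):
    # Drives a real browser against a running app; slower, more moving parts.
    def setUp(self):
        self.driver = webdriver.Firefox()

    def tearDown(self):
        self.driver.quit()

    def test_login_shows_dashboard(self):
        self.driver.get("http://localhost:8000/login")  # hypothetical app
        self.driver.find_element(By.ID, "username").send_keys("tester")
        self.driver.find_element(By.ID, "password").send_keys("secret")
        self.driver.find_element(By.ID, "submit").click()
        self.assertIn("Dashboard", self.driver.title)


if __name__ == "__main__":
    unittest.main()
```

The unit test runs in milliseconds with no dependencies; the front-end test needs a browser, a running app and a cooperative network, which is exactly why the two levels demand different thinking.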
Setting Priorities
Many open source projects start out as ways to learn, share and have fun. Quality is often a later part of the process (not saying it's not important, but early on, it's hazy, and that's OK). With maturity and more focused interest, it becomes considerably more important. Also, making a product make money sometimes takes priority over quality early on. Priorities shift, and again, that's OK, but at some point, you do have to care about quality, or someone else will take your idea and do it one better than you. When you have just a few people, very few specialists, and a rapidly changing code base, automation may just not be reasonable. Outsourced projects may also be difficult to get automated, though it can certainly be done, especially if the codebase is settled. When a project gets to the point that automation makes sense depends a lot on who you are, what you are doing, when you want to do it, and ultimately what you and your audience value. Santi shared some of his experiences making CSSify, a tool that converts XPath expressions to CSS selectors.
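I haven't dug into CSSify's internals, so rather than guess at its API, here's the flavor of the translation it automates, written as two equivalent Selenium locators (the page structure here is hypothetical):

```python
# Illustration of the XPath-to-CSS idea (locator values are hypothetical,
# and this is not CSSify's actual code or API).
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
driver.get("http://localhost:8000")  # hypothetical app

# The same element located two ways: an XPath expression...
link_by_xpath = driver.find_element(
    By.XPATH, "//div[@id='nav']/ul/li[3]/a")

# ...and the equivalent CSS selector, which is the sort of translation
# CSSify automates for you.
link_by_css = driver.find_element(
    By.CSS_SELECTOR, "div#nav > ul > li:nth-of-type(3) > a")

driver.quit()
```

The CSS form tends to be shorter, faster in most browsers, and easier to read, which is the whole pitch.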
Architect Your Tests
Very often, we jump in and hack and slay and make our tests to meet some immediate need. Do this enough and you have a test suite. Multiply that enough times, and you might think you have a good handle on, and high confidence in, the value of your automated testing... until something unexpected happens and the whole thing comes crashing down. Been there, done that, more than once. Ideally, all tests would be well formulated, would have a clean and well understood interaction model, and would be easy to create. Frankly, I don't know anyone who has reached that level, but taking the time to do some architecture for your testing can pay major dividends later on. Tests mature as the codebase matures, or at least they should.
Good Test Practices
Tests are code. Treat them as such. Set up version control and use it, religiously. Get familiar with every aspect of your testing, all the tools, all the configuration options; know what it takes to make, load, write and run tests. Being able to debug failures is essential. For the test manager, do all you can to make it as easy as possible for your team to do their work. Reproducibility is vital (tests break, it's going to happen, so prepare for it); even more frustrating is when a test works 99.95% of the time. Performance is very important. Parallelize where you can, and use analogues such as mocks to help you make sure your tests are doing what they should (see the sketch below). Make your tests as simple as they can be, as often as humanly possible. Can we do all of this, all the time, every time? Of course not, but thinking about these things and giving them attention helps to diminish the possibility of a complete derailment of an automation project.
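As a quick illustration of the "analogues such as mocks" point, here's a minimal sketch (my own example, not Santi's) using Python's unittest.mock; the send_receipt() function and its mailer dependency are hypothetical:

```python
# A minimal sketch of isolating a dependency with a mock; the functions
# and module layout here are hypothetical.
import unittest
from unittest import mock


def send_receipt(order, mailer):
    """Hypothetical code under test: builds a message and hands it off."""
    body = "Thanks for your order #{}".format(order["id"])
    mailer.send(to=order["email"], body=body)
    return body


class SendReceiptTest(unittest.TestCase):
    def test_receipt_is_sent_to_customer(self):
        # The mock stands in for a real mail server, so the test is fast,
        # reproducible, and fails only when *our* logic is wrong.
        mailer = mock.Mock()
        order = {"id": 42, "email": "customer@example.com"}

        body = send_receipt(order, mailer)

        mailer.send.assert_called_once_with(
            to="customer@example.com", body=body)


if __name__ == "__main__":
    unittest.main()
```

No real mail server, no flaky network, no 99.95%-of-the-time surprises: the test either proves our logic or it doesn't.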
CI and CD
Continuous Integration and Continuous Delivery... simply put, YES!!! There are tremendous benefits to having a system that runs your build process each time a check-in is made. With the unit tests being run, the components being rebuilt and the image being created (or the services being set up and restarted), there is a lot that can be done to help flag problem areas when these tests are run regularly, early and often. Putting in multiple changes and then doing a manual build can certainly show errors, but chasing them down can be vexing, as well as time consuming. Having a CI server like Jenkins auto-run with each check-in can help you know very quickly if your changes will be good, or if they will break the build (and if you will have to have your picture on the wall of shame :) ). Continuous Delivery is a special animal, and it's not for everyone, but for those who are willing to commit to the practice, the principle is the same. The more atomic the changes, the easier it is to tell what causes problems, and where. It's key and valuable if done right; it can be tremendously frustrating if it's done wrong. If you have low test coverage, or a low opinion of your test coverage, do NOT do Continuous Delivery.
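To picture what "breaking the build" looks like at the lowest level, here's a rough sketch of my own (deliberately CI-server agnostic, not actual Jenkins configuration) of the kind of gate a CI job runs on every check-in; the test directory layout is hypothetical:

```python
# A rough sketch of a CI gate: run the suite and exit non-zero so the
# server (Jenkins or otherwise) marks the build as broken.
# The "tests" directory is a hypothetical layout.
import subprocess
import sys


def main():
    # Run the unit tests first; they are the cheapest and fail the fastest.
    result = subprocess.run(
        [sys.executable, "-m", "unittest", "discover", "-s", "tests"])
    if result.returncode != 0:
        print("Unit tests failed -- breaking the build.")
        sys.exit(result.returncode)
    print("Unit tests passed; hand off to the slower Selenium suite.")


if __name__ == "__main__":
    main()
```

The exit code is the whole contract: the CI server doesn't need to understand your tests, it only needs to know whether this check-in left things green or red.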
Some Common Test Techniques
Santi took some time to discuss TDD, and the fact that he's not really a fan of the approach himself. He feels it's valuable if done with abstractions, such as using patterns like the Page Object Model or an abstraction language like Cucumber to do inside-out development. It's also important not to discount Manual/Exploratory testing. Yes, it's high cost, but it's also high sapience, and that in turn can be incredibly valuable. Doing high-value Exploratory Testing is hard. Be aware of that fact. Balancing exploration with the automation initiative is a fine art, and knowing when to leverage the value of each can be huge.
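Since the Page Object Model came up, here's a minimal sketch of the idea (my own example, not from the talk); the URL and element IDs are hypothetical:

```python
# A minimal Page Object sketch; the URL and element IDs are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By


class LoginPage:
    """Wraps the login page so tests speak in user terms, not locators."""

    URL = "http://localhost:8000/login"  # hypothetical app

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get(self.URL)
        return self

    def log_in(self, username, password):
        self.driver.find_element(By.ID, "username").send_keys(username)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "submit").click()
        return DashboardPage(self.driver)


class DashboardPage:
    def __init__(self, driver):
        self.driver = driver

    @property
    def greeting(self):
        return self.driver.find_element(By.ID, "greeting").text


# The test reads like the behaviour it checks; if a locator changes,
# only the page object has to be updated.
def test_login_greets_user():
    driver = webdriver.Firefox()
    try:
        dashboard = LoginPage(driver).open().log_in("tester", "secret")
        assert "tester" in dashboard.greeting
    finally:
        driver.quit()
```

That locator-in-one-place quality is exactly the abstraction Santi is arguing makes test-first work bearable at the UI level.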
Fundamentally, the Point is...
Organize your efforts, encourage your developers to write tests, and focus on edge case coverage and maintenance. Don't silo your QA engineers and have them just write tests; they can't keep up and the tests will suffer. Encourage a partnership where developers write, and QA reviews, maintains and enhances. Let the teams work together, and encourage an even partnership. Santi suggests having the developers write and maintain the tests, while QA reviews, writes and works with tools, and develops and grows the frameworks used. Automation needs good architecture. More tests mean more complexity. Complexity demands good architecture and design. These are not trivial things; they take a lot of time and commitment.
Michael Larsen, TESTHEAD