Hello and welcome back to the TESTHEAD live blog. It's been a few weeks, primarily because I went into overdrive in late March and April around 30 Days of Testability and STP Con. Still, I was invited to come out to see Browserstack's first meetup in San Francisco. I was intrigued, specifically because we use Browserstack at my company (and then didn't, but now do again; more on that in a later post).
We've kicked off the night listening to Browserstack CEO Ritesh Arora, who has been sharing the timeline of the growth and evolution of Browserstack. Again, I'm not necessarily here to talk about Browserstack as a product (I certainly can in future posts if interested) but I do appreciate the fact that they have developed one of the most responsive device farms anywhere. Their mobile offering is pretty extensive but, most importantly, it is quick when it comes to spinning up multiple devices in the cloud. Their overall vision is to develop the de facto testing infrastructure of the Internet. That's a pretty sweet goal if I do say so myself. As a remote engineer, the challenge of gathering enough devices together to do meaningful testing is a big one. A tool like Browserstack helps me extend my reach and test a variety of products I don't actually have.
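To make that concrete, here is a minimal sketch of what driving one of those cloud browsers looks like through Selenium's Remote WebDriver pointed at BrowserStack's hub. The capability values are illustrative, and USERNAME/ACCESS_KEY are placeholders for your own credentials (this follows the Selenium 3-era API; check BrowserStack's docs for current capability names).

```python
# A minimal sketch: driving a BrowserStack cloud browser via Selenium Remote WebDriver.
# USERNAME and ACCESS_KEY are placeholders; capability values are illustrative.
from selenium import webdriver

capabilities = {
    "browserName": "Chrome",
    "browser_version": "latest",
    "os": "Windows",
    "os_version": "10",
    "name": "meetup-demo",  # labels the session in the BrowserStack dashboard
}

driver = webdriver.Remote(
    command_executor="https://USERNAME:ACCESS_KEY@hub-cloud.browserstack.com/wd/hub",
    desired_capabilities=capabilities,
)

driver.get("https://example.com")
print(driver.title)  # confirm the remote browser actually loaded the page
driver.quit()
```

Run a handful of these in parallel with different capability dictionaries and you have the cross-device matrix that would otherwise require a drawer full of hardware.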
Dang it, I said I wasn't going to make this all about Browserstack. Oh well. For those curious, yeah, it's a tool I enjoy using. It's not perfect, but having to do without it for several months (a change in focus shifted us to different products) made clear just how much I liked it. Suffice it to say, we are back and I'm happy about that.
OK, enough marketing spiel. Lanette Creamer flew down from Seattle to talk at this event, which, hey, that's pretty cool. Lanette and I have traded war stories for a few years now (the better part of a decade, really, but who's counting ;) ). Lanette is here to talk about "Small Projects" and how they can actually be more difficult than larger projects. The common belief is that smaller projects will go faster. However, there are times when smaller projects don't really end up being faster. In many cases, challenges crop up that keep people from doing what they hoped to do. That small project can blow up very quickly and become a much bigger challenge than was expected.
When people talk about projects that are small, what they are actually aiming for is "something simple that we can do quickly". The problem is that scope often grows while the allotted time frame doesn't. Thus, the trick is to make sure systems are in place to keep everything on track. Small projects are risky projects, so it's vital that we trust our team members. We also have to realize that there is no way we will be able to do everything we would like to do. Perhaps swap writing bug reports and waiting for responses for real-time pairing with your developers. Get away from the time killers and project killers: avoid politics, steer clear of micromanaging, and try not to be so beholden to perpetual status updates. Be aware that there is a freedom that can be had with the right attitude. You may not be able to do everything but it's better than doing nothing. Start there :).
---
The next talk was presented by Priyanka Halder, who currently heads the Quality Engineering team at GoodRx. Priyanka's talk is "Taming the dragon: Going from no QA to a fully integrated QA". I can relate to this from a variety of situations in my testing career, where I've either had no QA or we have had to retool the testing that was done previously. I've struggled in a few of these environments and have at times had to launch new initiatives just to make headway. Sometimes I've had lots of automation, but it had limited focus, or it covered one part of the product while other areas that needed coverage had none yet. Very often, what worked in one part of the product didn't work in another. As for the premise of Priyanka's talk, I also know what it feels like to be the first tester on a team that didn't have one previously. That's a special challenge, and a real one, and it requires a certain kind of finesse. It's not just that we have to test; we also have to make the case for why we are worthwhile to have on the team.
Very often, the problem we have to deal with up front is testability. If we are coming into a QA environment with a lot of manual testing, placing an emphasis on testability early can reap large dividends, whether or not the goal is to automate.
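One concrete example of an early testability ask (my illustration, not necessarily Priyanka's): have developers add stable test hooks to the UI so automation isn't chained to fragile markup. Assuming a Selenium `driver` like the one in the earlier sketch, and a hypothetical `data-testid` convention:

```python
# Sketch: stable test hooks vs. brittle locators (data-testid is a common
# convention, used here as a hypothetical example).
from selenium.webdriver.common.by import By

# Brittle: tied to layout and styling; breaks whenever the markup shifts.
login_button = driver.find_element(
    By.XPATH, "//div[2]/form/button[contains(@class, 'btn-lg')]"
)

# Stable: tied to an attribute the team added deliberately for testing.
login_button = driver.find_element(By.CSS_SELECTOR, "[data-testid='login-submit']")
```

The attribute costs developers a few seconds per element and buys the testers locators that survive redesigns; that's the kind of cheap, early testability investment that pays off whether or not you automate right away.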
---
The last talk for the night is being delivered by Brian Lucas of Optimizely, who is responsible for their Build, Test, and Release engineering processes. His talk is "Avoiding Continuous Disintegration at Optimizely". Successful software development shares certain common traits: speed, engagement, and iterative development all help keep quality high. However, as the product becomes more complex, if the time and effort are not put in to keep that process rolling, things can fall apart quickly. Brian suggests that releasing faster and shipping quicker comes down to working in smaller increments, shipping more frequently, and working through experiments. Components usually work well in isolation but struggle when they have to work with each other. If your team is playing Dev and QA ping pong, stop it! By working more openly with QA and doing testing earlier, it is possible to get more coverage even in smaller increments of time. In short, your testers are not going to be able to do all of it, and groups that don't prioritize for this will lose that opportunity.
Optimizely crowdsources their QA to their entire engineering team so that all of the testing effort doesn't fall on the heads of the testers alone. More to the point, rather than having one person test a hundred commits, having developers test their own commits speeds up the process and lessens the load on the testers so that they can focus on more pressing areas.
Continuous Integration, Continuous Delivery, and Continuous Deployment all take time and care to set up, but investing that time makes things easier down the line. Ultimately the goal is to improve the feedback cycles and build out the infrastructure in a way that allows for repeatability, testability, and ease of implementation in many places.
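As a small illustration of the fast-feedback idea (my sketch, not Optimizely's actual pipeline): a tiny smoke suite that a CI stage could gate on, so every commit gets a quick sanity check before heavier testing. The URL and endpoint below are hypothetical placeholders.

```python
# Sketch: a minimal pytest smoke suite a CI stage might run on every commit.
# BASE_URL and the /health endpoint are hypothetical placeholders.
import requests

BASE_URL = "https://staging.example.com"

def test_health_endpoint_responds():
    # Fast, shallow check: is the service even up?
    resp = requests.get(f"{BASE_URL}/health", timeout=5)
    assert resp.status_code == 200

def test_homepage_renders():
    # Slightly deeper: did the app return an actual page?
    resp = requests.get(BASE_URL, timeout=5)
    assert resp.status_code == 200
    assert "<title>" in resp.text.lower()
```

Checks like these are deliberately shallow; their job is to fail fast and keep the feedback cycle short, leaving testers free to spend their time on the areas that need real judgment.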
---
On the whole, a nice first outing, Browserstack. Thanks for having us. Also, thanks for the shirt and water bottle. I should get a few laughs at tomorrow's standup when I wear it (hey wait, isn't that... yes, yes, it is ;) ).