Planning for Long-term Test Automation Success [Podcast]
From automating infrastructure and environment provisioning to code/release deployment—and everything in-between—automation is on everyone’s minds these days. While some are just getting their automation initiatives off the ground, others are well on their way and are now looking for what’s next as they attempt to scale their efforts throughout the enterprise.
This week on The Skytap Podcast, we sit down with Utopia Solutions founder and CTO, Lee Barnes. Lee shares his thoughts on how to form a successful, long-term test automation strategy, and together we define realistic test automation statistics and how to determine if your organization is mature enough for an automation implementation.
We highly recommend subscribing to The Skytap Podcast through iTunes or SoundCloud, and we hope you enjoy this week’s episode! Feel free to listen now, download for later, or read the full interview’s transcription below.
Noel: You mentioned during your session that one of your criteria before implementing or increasing automation should be knowing what your goals for it really are. I’m sure different people can have different goals, but where do those goals usually fall? What will some of those goals usually be, and how do you know if the goals you came up with are good ones to have?
Lee: I certainly like to steer people away from just focusing on a percentage of test cases without looking at what they’re actually doing in regression testing: what those activities are, what lends itself to automation and what doesn’t. It’s very difficult to just draw a line in the sand. I like to point people more to, “Are your expectations realistic?”
I mentioned in the session, don’t use the number of defects found as a measurement. That’s really not important from an automation perspective. Are you taking the tedious, time-consuming work away from your high-value resources and freeing them up to do more valuable things? I think that’s a very important goal. Increasing test coverage is another: if you can get the same coverage for less cost or less time, or increase your coverage at the same cost and time, those are very common goals as well.
Noel: Those are all business goals. There’s been a lot of talk this week about the things that testing used to measure: the number of bugs found, the number of tests performed, the number of test cases written. None of those things are easy, or maybe even possible, to tie directly back to business revenue. All the things you just listed are things that do show that kind of gain.
Lee: They should. They absolutely should.
Noel: One of the other things that you mentioned was looking at your test environments when you’re assessing are you ready for automation. You suggest really looking at your environments to make sure they can actually support automation. That was something I don’t hear a lot of people talk about. What are some of those things that prove that an environment is able to handle that, especially as you increase automation over time?
Lee: Sure. The real measure of that is, “Can you reliably execute your automation solution?” If not, more often than not it’s an environmental issue: there are other activities going on in the environment that interfere with what you’re trying to do, or that interfere with your data. First, have an understanding of what else is going on in that environment, what the data looks like, and what your requirements are. Then take steps to isolate your activity from those other activities, and control and/or isolate the data you use as well. Those are really the big-ticket items right there.
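Lee’s point about controlling and isolating test data can be sketched in code. This is a minimal, hypothetical illustration (the `create_record` callable and naming scheme are assumptions, not anything from the interview): each automation run tags its data with a unique run ID so that other activity in a shared environment doesn’t collide with it.

```python
import uuid

# Hypothetical sketch: namespace all test data with a per-run ID so
# concurrent runs in a shared environment don't collide with each
# other's records.
RUN_ID = uuid.uuid4().hex[:8]

def isolated_name(base):
    """Return a name unique to this run, e.g. 'customer-3fa2b1c0'."""
    return f"{base}-{RUN_ID}"

def make_test_customer(create_record):
    """Create a customer record owned by this run. The caller supplies
    the environment's create function (an assumption for illustration)."""
    return create_record(name=isolated_name("customer"))
```

Because every record name carries the run ID, cleanup and debugging are also simpler: anything tagged with your run ID is yours to delete or inspect.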
Noel: One of Skytap’s points is obviously around environment availability. I was in a session earlier today and someone asked, “Who here struggles with environment availability?” Every hand in the whole room went up. Which is great, because that means people need Skytap’s service. But at the same time, people have been struggling with environment availability for years, still trying to share environments, still trying to do the things that at some point they have to realize are bad ideas.
Lee: I think so many organizations, from an environment perspective, think in terms of legacy: “We’ve always done it this way. We’ve always just had this environment.” Even though the technology has changed, their thinking about environments hasn’t. With today’s technology and virtualization, there are a lot of things you can do with environments that you couldn’t in the past, but not many people are putting in the time or effort.
Noel: You also talked about some of the ways to go about choosing an automation tool that’s right for your organization. We recently gave a presentation (it wasn’t about test automation specifically) about the need for consistency, configurability, collaboration, and control in any modernization initiative or tooling decision you’re making. You need all four of those things for a tool to be a good fit for your organization. I was curious what other qualities you would recommend ensuring your automation tools have.
Lee: The one thing I don’t think people consider enough is, “Is there a community of support?” Especially with automation, where you’re essentially developing something. You’ve got your tools and your tool belt, and you’re going to build something. What you’re trying to do has probably been done before.
So, is there a community of support out there? Is there enough knowledge out there in the world that you can draw from? Or is it something that’s brand new and different and you’re kind of beholden to just a very small set of people or one company in getting that support?
Noel: You also talked about the benefits of having an application under test that’s in a frequently releasable state, and that it also needs to offer visibility into system changes as early as possible. That obviously brought DevOps to mind, which comes up more and more at each testing show every year. In the “early days,” testers were wondering, “Where does testing fit in DevOps?” but it turns out that testing runs throughout it.
And that creates a benefit that testing can deliver to Operations. Operations gets visibility into the things that are being automated, and into dashboards that are automatically populated with really powerful information. It closes the feedback loop: testing doesn’t have to put all this information together and then find a time to report it to Operations; Operations has constant visibility into all of these applications under test.
Lee: Right. In the past, Operations didn’t get visibility until the application reached production. Now the quality metrics and the performance metrics, whether they’re development or production metrics, are all on the same dashboard. That feedback loop is essentially completely closed now, from all aspects, not just from a performance aspect.
Noel: One last question for you. Say you’ve answered that first question, “What are our goals? What are we looking to achieve with automation?” Are there some goals that can come further down the road as you scale automation up, goals that maybe weren’t in your original idea of what you were going to get? Have you seen later-stage goals revealed to organizations that have really ramped up their automation?
Lee: Sure. I don’t think you want to take the “big bang” approach of trying to think of everything you want to do with automation from day one. You want to address those process and organizational issues, of course, get a framework built, and iterate on it over time. But look to things like, “Can this be part of the continuous integration chain? Is there value in other places in the organization? Maybe we can use some of the components to create data for something. Maybe we can help customer support with UAT testing.” Look for other technical aspects you can build into your framework to make it more scalable: not just other test cases and coverage, but other applications.
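Lee’s idea of reusing framework components beyond test cases, for example to create data for other teams, can be sketched as follows. This is a hypothetical illustration (the `build_order` component and its fields are invented for the example): a data builder used by the automated tests is also exposed as a small CLI, so a group like customer support could seed realistic UAT data with the same code the test suite trusts.

```python
import argparse
import json

def build_order(customer_id, items):
    """Hypothetical framework component: assemble a valid order payload.
    Called by automated tests, and reusable for seeding UAT data."""
    return {
        "customer_id": customer_id,
        "items": items,
        "total": sum(price for _, price in items),
    }

def main(argv=None):
    # Expose the same component as a CLI for non-test uses,
    # e.g. customer support preparing UAT data.
    parser = argparse.ArgumentParser(description="Seed a sample order")
    parser.add_argument("customer_id")
    args = parser.parse_args(argv)
    print(json.dumps(build_order(args.customer_id, [("widget", 9.99)])))

if __name__ == "__main__":
    main()
```

The design choice here is the one Lee describes: the framework grows by gaining new consumers of its components, not just new test cases.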