The DevHops Podcast Episode 4: Continuous Testing


Welcome to the fourth episode of the DevHops Podcast! This week we’re speaking with Parasoft Chief Strategy Officer Wayne Ariola about software quality and the emergence of continuous testing in the enterprise. Let’s get started—we hope you enjoy the show!

Noel:  Hey, DevHops listeners. Just a quick tip to start this week’s show with. If you’re already familiar with the show, what it’s about, who we are, and who our guests are, and you’d like to go ahead and get right to the meat of the show, you can skip ahead to about 3:30 in. Thanks!

Welcome to DevHops, the podcast where ideas about enterprise software development and testing flow freely. I’m your host, Noel Wurst, editor-in-chief of the Skytap blog. Just a little housekeeping: all ideas expressed here are our own, and not necessarily those of Skytap or the employers of our guests.

Today’s guests are our regular co-host Jason English, Skytap’s Galactic Head of Product Marketing, and Wayne Ariola, Chief Strategy Officer at Parasoft. Parasoft is a technology alliance partner of ours, and Wayne is an awesome expert on the costs, challenges, and benefits of enterprise software quality.

We thought for today, we’d talk about continuous testing: what it means, what it requires and what it delivers to the business. We’re also reviewing three beers during today’s show, since we do call it DevHops. We’re just about to crack those open, so let’s get started.

I’m a big fan of Abita out of Louisiana. They make Turbodog and Purple Haze, go-to’s of mine for years, but yesterday I noticed they’ve got a Wrought Iron IPA, named after all of the fancy wrought iron in New Orleans. I have not tried it yet, but am definitely looking forward to it! Wayne, do you want to go next?

Wayne:  Sure. I hate to do this to you guys, but I’m not going to be so hipster in my selection.

Noel:  Ha! That’s just fine.

Wayne:  I’m actually drinking Beck’s, and for a couple of reasons. First of all, my son is named Beck. My household is filled with this nomenclature. The other thing I like about it, though, is it’s a crisp, good, summer beer. I know everyone wants to go to these nice microbrewers, and I love them, too. But for a good, clean, refreshing, ice cold drink, I happen to like Beck’s. I love it.

Noel:  Nice. I need to revisit Beck’s. I have not had one in a long time. Jason, how about you?

Jason:  Well, right now I’m going with whatever is in the Skytap keg, and that is currently Manny’s from Georgetown Brewing in Seattle here. That’s basically an unfiltered kind of pale ale. It’s kind of middle of the road. It’s a semi-commercial beer in that it’s probably owned by a large brewery now, and you can find it all over the place. But it’s a little higher than a Coors Light, I’d say for sure.

Noel:  Cool. Well, let’s dive right in. I’m excited about this one! I wanted to talk about continuous testing. We’ll probably get into service virtualization a little bit as well, and talk about how quality and speed are really no longer in a “host/parasite” relationship. To get one, you no longer have to piggyback off of, or sacrifice, the other.

I chose this topic because there’s been a lot of coverage recently of quality failures at some airlines and the stock exchange, and I believe an ISP is experiencing a major outage right now as well. Quality is making a lot of headlines. It got me thinking about continuous testing, and whether it could reduce the number of failures like these and others.

I always like to start these off with getting a base definition. Wayne, I know Parasoft speaks to continuous testing a lot, and I was curious if you would give your definition. Just how continuous are we talking about here?

Wayne:  If you took the idea of automated … I do a lot of talks on moving from automated to continuous, and the main topic here is bridging the business expectation associated with risk.

For example, you mentioned it when you started the discussion: speed is of the essence. If you’re going to get differentiated business software out to your consuming public as fast as possible, your business managers must be extraordinarily curious about, “What is that risk? What business risk does that release candidate pose if it goes live and happens to go down?”

Today, when I’m visiting clients, this is the gap. They have nice, good, automated testing. But the automated testing doesn’t necessarily point out, “What are the business conditions you would like to know about in order to make a business trade-off decision on whether that release candidate goes or not?”

Do the automated tests that happen today answer that? Quite honestly, no. The battery runs, through some sort of CI process, and, quite honestly, that application is probably going out, like it or not. Today, if you want to get to a mode of continuous testing, you’ve got to start answering business questions.

Those are the same business questions that, quite honestly, the auditors managing your financial statements should be asking, right? We’re at the point in time at which the business application is the interface to your business. If that goes down, there are true, defined, palpable business risks.

So, before that release candidate actually gets to the point of going live, we’ve got to understand better, in very distinct, quantitative terms, what risk that potential release candidate poses. Continuous testing is the vehicle to get there.
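To make Wayne’s point a bit more concrete, here is a minimal, hypothetical sketch of the kind of “business risk gate” he’s describing, written in Python. None of this comes from Parasoft’s tooling; the test names, impact weights, and threshold are invented for illustration. The idea is simply that a CI step can roll raw pass/fail results up into a quantified risk figure that a go/no-go decision can be made against, with the weights coming from the business conditions Wayne mentions rather than from the test suite itself.

```python
# Hypothetical release-risk gate: weigh failing tests by business impact
# and block the release candidate if the aggregate risk is too high.
import sys

# Invented example data: each test carries a business-impact weight (0..1).
TEST_RESULTS = [
    {"name": "checkout_end_to_end", "passed": False, "impact": 0.9},
    {"name": "login_flow",          "passed": True,  "impact": 0.8},
    {"name": "profile_page_render", "passed": False, "impact": 0.2},
]

RISK_THRESHOLD = 0.5  # arbitrary go/no-go cutoff for this sketch


def release_risk(results):
    """Return a 0..1 risk score: impact-weighted share of failing tests."""
    total = sum(t["impact"] for t in results)
    failed = sum(t["impact"] for t in results if not t["passed"])
    return failed / total if total else 0.0


if __name__ == "__main__":
    risk = release_risk(TEST_RESULTS)
    print(f"release candidate risk score: {risk:.2f}")
    # A non-zero exit code is how a CI job would typically fail the gate.
    sys.exit(1 if risk > RISK_THRESHOLD else 0)
```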

Jason:  That’s a great point, Wayne. My definition of continuous testing has kind of mellowed over the years. It’s not even about throwing more tests, or even more people, at it, right? There has to be a capacity for those people to continuously test. If they’re able to test in parallel as these items become release candidates and are about to be shipped, the ability has to be there for them to see the difference between the as-is state and the what-if: “If I put this out, do I have clarity into what the consequences could be?”

That definitely extends what you’re saying about continuous testing. Do I know what the actual risk is, and do I have the capacity to see, in a clear way, what’s going to happen when this comes out? That really is the whole universe of continuous testing to me. I think this is definitely where we’re going.

Wayne:  Yes. By the way, I love the usage of the term “consequence.” I don’t think it’s being used enough. Because when you unravel it, what it means is that there is some layer of penalty, or some level of pain associated with having to correct the situation, number one. That’s what we traditionally talk about in testing. You know, what the cost is to correct it.

But that’s no longer the biggest issue. The biggest issue now is what is the true consequence to the business itself when that app goes down? I love that concept of consequence.

Noel:  So what’s really driving the need for this? Obviously it’s a great idea, but is it to be able to maintain the level of quality that your software hopefully already has? Is it because the quality is insufficient, and this is something to get it to where it needs to be? Is it a response to business demands? I’m sure there’s a number of reasons.

Jason:  On this one, from my perspective, I just see it as “speed kills,” right? I mean, the business is incenting development teams to release software faster than they’re able to actually validate it. They’re driven by competition. They know that if they don’t come out with a feature before the other company, they’re going to lose in the marketplace. They’ll lose market share.

That is driving everything. That’s the mentality that drives much of development itself, as opposed to, like we talked about, the consequences. In short, I would just say speed kills, and that’s what’s driving continuous testing.

Wayne:  I would also say that all organizations today have been faced with the idea you mentioned at the beginning of the conversation: there is no longer a trade-off between speed and quality. Yet, when you really press the conversation, speed matters.

I always use the analogy, if you’re going over a speed bump at five miles an hour … You know what? Not so bad. But if you’re going over a speed bump at fifty miles an hour, you know what? You’re going to feel the effect of that. And that’s where we are today. We want this level of acceleration, yet the process hurdles that are currently there and those speed bumps that are currently in the road are going to hit us in a much more impactful manner.

Now, in terms of what’s driving this, there’s also, unfortunately, a ton of history associated with the quality discipline. You can go through a litany of psychological or sociological issues associated with what’s going on in the realm of testing.

QA was considered the bottom rung, and the malleable nature of software has allowed us to defer the work of creating really quality software up front. The whole technical debt argument. Essentially, what’s going on is that our level of expectation from the business perspective is changing. I’ve done some research on this to try to prove it out.

I use the CNN app on my mobile phone every now and then. It’s remarkable to me the number of times that I get actual news notifications about software outages. It happened when United Airlines went down. It happened when American Airlines went down a couple months ago.

And what I do is essentially look at the impact to the stock price associated with that event. Last year, in 2014, the net impact of those news announcements equated to about a 3.75% loss of market capitalization. You did a new release of your software, the software failed, it hit the news, and basically your stock price declined 3.75%.

Now, this is more anecdotal, but I’m assuming that if you had asked that development manager just prior to the release, “Hey, what’s the risk of us losing 3.75% of our market cap with this code release?” they would have said, “Zero, nothing, what are you talking about?” Right?

The world is coming to terms with the fact that faulty software equals faulty business. Now, the interesting thing about 2014 versus 2015 is that the penalty has been increasing. Right now we’re on track for a 4.12% loss of market capitalization every time news of faulty software hits the wire. I think that comes from the fact that the world is becoming much, much more aware that software failure equals business failure.

Noel:  Wayne, there was a recent webinar that you were part of along with Alaska Airlines. The QA manager from Alaska Airlines … I think it was you that asked him what the hardest part was about getting things like continuous testing and service virtualization off the ground.

I think we’ve established today that these are very much in need to avoid these huge losses that can occur. But at the same time, he said that “building trust” was the hardest part of getting it off the ground, which I thought was really interesting. Even once you establish that these things are good ideas to begin doing right now, there’s still a level of trust that has to be tackled.

There wasn’t too much time to get into that there toward the end of that webinar, but I wanted to know what some of the challenges are inside of that. Why would it be difficult to get people to trust that these are the ways to start doing things?

Wayne:  As we know, with large organizations, the science of change is critical. I think we’re at a true inflection point when it comes to the SDLC. I mean, there’s no doubt about it, right?

Things like the cloud, which it seems that we’ve been talking about forever, have become much, much more of a reality. Things like iterative delivery, and continuous delivery, continuous release of applications are becoming much, much more of a reality. What’s going on is that the outcomes and the vehicles to achieve those software outcomes are becoming much, much more real.

Internally, we haven’t necessarily changed our processes when it comes to testing. We’ve looked for more automation, which hasn’t necessarily done much to improve the overall quality of the application. There needs to be, really, a true process reorganization when it comes to actual software quality.

When you talk about those levels, the things we need to be doing differently, and the concept of change, there’s a massive skills mismatch in the traditional organization. The trust question is about moving away from some of those more monolithic, large, traditional quality infrastructure tools toward working with much, much more agility.

In my opinion, we’re missing one major component. From a software quality perspective, 90% of our effort goes into working at quality from the bottom up. Meaning that we have a requirement, the requirement introduces a level of change, and I’m going to build a test that exercises the application from the bottom up to validate that the requirement is met and, potentially, to measure the impact of the change on the overarching application.

Is that approach needed? Well, absolutely, yes, but we’re spending far too much time on it. What needs to be done is more of this top-down approach to validate more automatically that the end-user experience is not going to be impacted by the changing application.

This ties back into Jason’s comment about the consequence of change. This is where the concept really starts to play, as we more automatically validate the risk of that release candidate, top down, as it moves through the cycle.

When we’re looking at this, and we’re talking about trust, it’s kind of about letting go of the old in order to taste test some of this new stuff. It’s just going to take time.

Jason:  Exactly. In one sense, you’re asking them to believe in something that kind of flies in the face of everything they had conventionally done, right? It’s kind of like a trust exercise. You’re walking on fire and, if you believe that it’s not going to hurt you, then it won’t, right?

It goes beyond that: using service virtualization or something like it to replace those constraints, or simulate them away, is really a great way to have everything you need, and to have it in a more predictable fashion.

You can actually have all the scenarios and all the data you need to validate the outside boundary conditions of what could happen as a consequence of your testing. If teams can just believe that, then when they’re starting a project they’re not immediately asking, “Where are my servers? Give me some hardware and give me the applications.”

That’s not even necessary anymore, because you can use things like service virtualization and cloud infrastructure, which they also didn’t believe you could do at this scale, and which we are doing now. You have all this unconstrained, elastic ability to build up these processes in parallel and have a much more consistent, stable environment when you think about it. It does seem like it takes a leap of faith, but once teams have gone through the process once, it’s possible to get them to trust it.

Wayne:  You can actually measure, from a simulation perspective, accuracy versus the operational environment to give people the confidence. So it’s there. You’re absolutely right.

I’d almost say that in every other industry in the world, as they’ve evolved the concept of quality within their domain, simulation has become a massive component of being able to do this stuff earlier and more completely.

In software, we’ve always used the idea that it’s too complex as a reason to shy away from doing much deeper simulation, the type of simulation that service virtualization provides. The trust issue, I think, goes out the window, because no matter what, we can back that trust up with actual data to assure the end user, the tester, or the manager that the simulated environment really is very parallel to the operational conditions, which they’re probably not getting in the staged test environment in the first place.
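Wayne’s point about measuring a simulation against the operational environment can be sketched in a few lines. The following is purely illustrative, not a description of any vendor’s product: it compares responses recorded from production against responses from the simulated dependency and reports how often the fields you care about agree. The field names and sample data are invented.

```python
# Hypothetical fidelity check: compare responses recorded from production
# against responses from the simulated (virtualized) environment and report
# how often the selected fields match.

def fidelity(recorded, simulated, fields):
    """Fraction of paired responses whose selected fields match exactly."""
    pairs = list(zip(recorded, simulated))
    if not pairs:
        return 0.0
    matches = sum(
        all(r.get(f) == s.get(f) for f in fields) for r, s in pairs
    )
    return matches / len(pairs)


if __name__ == "__main__":
    # Invented sample data standing in for recorded vs. simulated traffic.
    recorded  = [{"status": 200, "currency": "USD"}, {"status": 200, "currency": "USD"}]
    simulated = [{"status": 200, "currency": "USD"}, {"status": 500, "currency": "USD"}]
    score = fidelity(recorded, simulated, fields=["status", "currency"])
    print(f"simulation fidelity vs. production: {score:.0%}")  # 50% in this example
```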

Noel:  Right. So, shifting left and service virtualization: two areas that Skytap and Parasoft both talk about frequently. Kind of like that trust thing we were just talking about, I was imagining that by introducing things like this, you’re creating those earlier opportunities to test, and I can see where some might wonder how that would be done.

How do you do something like continuous testing when there are components teams have never had access to at that early a stage in the SDLC?

Wayne, could you go into how service virtualization creates those opportunities, access that people may never have even imagined having at all, much less this early?

Wayne:  The first great barrier, and I know Jason has probably experienced this as well, is, “What is the definition of the application under test?” Many, many folks, when you start talking about being able to bridge systems or do end-to-end testing, don’t necessarily break out of the paradigm of the single application stack they actually have access to, the one going through its upgrade, its iteration, or its release.

We did a survey a couple of years ago that we’ve continued to collect data on, looking at a single application under test at Global 2000 organizations. Folks came back and said that, on average, a single application under test has thirty dependencies, meaning thirty other applications it depends on in order to complete its business transaction.

Then we asked, “Of those dependencies, of those thirty applications on average, what do you have access to in a staged test environment?” And the answer came back as six.

So, of those thirty dependencies, folks have access to six, on average, in a staged test environment. That means twenty-four application or transaction hubs were either a) being stubbed out, or b) just not being tested. And, by the way, b) was overwhelmingly the response. They were just being ignored and not tested.

Talk about risk, right? You’re kind of letting it fly, and sending it out there.

I wanted to ask a follow-up question, “Is rollback a strategy for you guys?” but it never got that far, because it would’ve been too complex.

You really, really have situations there where there is a significant exposure to risk. From the perspective of the team that’s testing, the first concept is, “what is the total system under test that you need to have a great purview of?”

Now, I don’t mean to rag on, or take a shot at, technologists in our field, but 90% of the time, when we go in to help people with the concept of service virtualization and we paint out the system under test for the first time, laying out all of the dependencies from the perspective of the AUT, the application under test, and visualizing them, it’s really the first time they’ve understood the total dependencies associated with the application they’re working on.

The first barrier we face moving to continuous testing is understanding the application reach, or the total system under test.
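For anyone who hasn’t seen what “stubbing out” one of those unreachable dependencies can look like, here is a deliberately tiny sketch in Python: a stand-in HTTP service that answers with a canned response in place of a system the test environment can’t reach. This is only an illustration of the concept, not how Parasoft’s service virtualization works internally; real tools record and replay far richer behavior (latency, error conditions, stateful conversations), and the endpoint and payload here are made up.

```python
# Minimal, illustrative stand-in for an unavailable downstream dependency.
# The application under test is pointed at this local service instead of
# the real system it can't reach in the staged test environment.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED_RESPONSE = {"fareQuote": {"flight": "AS123", "price": 199.00}}  # invented payload


class VirtualDependency(BaseHTTPRequestHandler):
    def do_GET(self):
        # Always answer with the same canned JSON, regardless of the request.
        body = json.dumps(CANNED_RESPONSE).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # Configure the application under test to call localhost:8080 instead of
    # the real (inaccessible) dependency.
    HTTPServer(("localhost", 8080), VirtualDependency).serve_forever()
```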

Jason:  When we talk about shifting left, we also want to look at how much time you’re actually spending creating duplicate infrastructure, coding your own tools, or doing things that have nothing to do with the actual value you’re delivering into the product.

So am I actually coding the new feature, or am I creating more tools in order to test the new feature, right? Am I actually testing this business functionality, or am I maintaining a bunch of scripts to test the business functionality, right? What happens when the volume of code that exists in this test space is as high as the amount that’s in the application itself, right? You’re basically creating more work for yourself. So what we need to do is eliminate that redundancy.

And that’s why you decouple things: using service virtualization, using better methods of test data management, and then using a cloud environment where each team can stamp out their own environment, so they’re not spending time setting up infrastructure, writing scripts to do so, or requesting it from IT.

Every time you see lead time that we’ve taken for granted getting built into our software lifecycle, let’s focus on that. How much of our time can we spend actually delivering value to the business?

If everybody could think that way all the time, I think they would probably identify this earlier as a problem, right? I value my time. I want it to be delivering something to the bottom line of the business, and that’s what it’s about.

Wayne:  And that’s not even talking about the level of consistency, as you mentioned. One developer’s scripts for carving out a test from an application stack don’t necessarily equal the set of scripts sitting, unshared, on the other guy’s desktop. So with that lack of consistency in how you’re exercising the application, the complexity goes through the roof. That’s a great point, about when the code to test eclipses the actual application itself. That’s interesting.

Noel:  Well, that’s all the questions I had for today. One plug I want to throw out there: on Thursday, August 20th, from 2:00 to 3:00pm Eastern, 11:00am to 12:00pm Pacific, Parasoft and Skytap are teaming up for a webinar titled “Why Testers Can’t Test, Part 2: Development and Test Environments in the Cloud.” We’ll share a link to register here with this podcast and on social media as well. For anyone listening before the 20th, I definitely recommend registering to hear a deeper dive.

Let’s do a quick beer recap before we sign off for the day. I’m pretty much through this Abita Wrought Iron IPA. It was pretty good. I heard someone earlier this weekend describe not liking IPAs because they felt like each one was its own meal, and that after two or three they just felt weighed down. This one’s a lot less heavy. I can see drinking these in the sun.

In Louisiana, you probably don’t need something that’s going to completely floor you that quickly. It’s a little bit lighter than the ones I’ve had in the past. Wayne, is the Beck’s holding its own this summer compared to summers past?

Wayne:  It really is, and not only because it shares a name with my son. I got the one-pint, six-fluid-ounce bottle, and I’ve got to admit, it’s a nice warm-up right before going to lunch. I’ve got to figure out a little nap time now.

Jason:  With us being on the Pacific coast, it’s kind of weird to be drinking before lunch, but the Manny’s kind of … I don’t know. It just sort of … It gives me the feeling of, when you go into a bar and it’s like the morning after. I kind of get that feeling from it.

Wayne:  It depends on your state. It might be a good thing, depending on what your evening was like. Maybe it’s an improvement.

Jason: It’s about shifting left and understanding the consequences.

Noel:  Well, again everybody, thank you so much for joining us today. If you are listening to this earlier than Thursday, August 20th, make sure and sign up for “Why Testers Can’t Test, Part 2, Development and Test Environments in the Cloud.” Even if you’re listening after the 20th, we’ll be sure and make the webinar available for your on-demand listening, just as we do these podcasts. Thanks again so much to everyone for joining us today. Have a good one.

Well, that is all for this week’s edition of DevHops. Special thanks to Wayne Ariola and Parasoft for joining us. If there’s a topic you’d like to hear discussed, or even if you’d like to participate in a future episode yourself, let us know. We’d love to have you.

And if you enjoyed this content, I invite you to follow our blog at Skytap.com/blog for more commentary on software development, testing, cloud, DevHops and more. Until next time, keep your head in the clouds and your DevHops fresh. I’m Noel Wurst for Skytap.
