The DevHops Podcast Episode 7: Digital Transformation with CA
This week we sit down with Justin Vaughan-Brown, CA Technologies’ Global Digital Transformation Lead, and Jason English, Skytap Galactic Head of Product Marketing, to discuss digital transformation in the enterprise. What has led enterprises to make this transformation, what role do DevOps and agile play, and what are the benefits of modernization initiatives? We discuss all this and more in this week’s episode.
Noel: Justin, what are some of the ways that you see organizations today modernizing or transforming their legacy apps and their processes around software delivery? And what kind of cultural changes are required for those types of initiatives to succeed?
Justin: Many organizations have a huge amount of legacy technologies that they can’t just immediately get rid of because this would be like a heart transplant. Even if they’re migrating off certain platforms over time, a lot of these technologies have a huge amount of inherited business logic that still remains very, very relevant.
They just need to get it working better than it has before. This is where Gartner had this concept of Bimodal IT, where you keep, in a sense, the lights on and the engines running around the core business. This means making sure you don’t have any outages or delays, and the fundamental infrastructure is still performing.
At the same time, you have a kind of a mode-two area which is your innovation engine which goes out and seeks new ways of doing things and brings in truly creative approaches and processes. Over time, that becomes normalized and industrialized, and then it becomes, not so much mode-one, but it becomes part of what we do every day. It becomes part of the core.
Over time, the way that the organization generally does its day-to-day activities becomes more contemporary and more modernized, but it does so through this kind of process of osmosis through the mode-two model, where it’s picking up and experimenting with new approaches.
Jason: That’s a really good approach, Justin. I think we’re looking at bimodal IT as basically if you have these hardened and important systems that are running the business, you don’t want to interfere with that ongoing operation, obviously. You want to position yourself to be able to deliver some new functionality to meet some customer needs, and that can be really hard to do. I think a lot of companies have treated those two items as separate goals, right?
What we’re looking at now is the ability to unwind functionality from those core systems so that you can actually make changes when you need new functionality. If it’s to meet a certain customer service need or to add a new logistics module, for instance, you have to be able to have that existing technology in a safe place and also unwind that in order to do innovation on top of it.
A lot of it really has to do with how the business is incenting people to work together, and how to make this happen in a safe way so that they both have one shared goal, right? That’s kind of the cultural side of it.
Justin: Exactly, and this is one point I made in a recent article on bimodal IT. You can’t just have folks in mode-one feeling that either their job is threatened, or that they’re in the dull, unexciting part of the business. They want to feel that they’re still relevant.
This is where service virtualization has a great play in terms of increasing and improving mainframe dev/test environment availability, so you don’t have those restrictions and constraints. The mainframe side of the business isn’t seen as holding up innovation or development.
To your point, Jason, I think the availability of these environments on demand is helping as well. For example, I was at Gartner Orlando a few weeks ago, and this was a very, very strong topic that came up. The ability of the organization to spin up those environments and move and adapt to changing market dynamics is super important.
Noel: Justin, I read an article you did earlier this year where you asked your guest for some good starting points for embarking on any large-scale digital transformation, and your guest said that you really need to clearly define early on what digital success looks like. I absolutely agree with that, and I wrote something recently about that exact same concept.
For something like bimodal IT, where you have these two different groups, can that be difficult to assign a single definition of success? Is it hard for teams to collaborate and work together towards a common goal if they’re trying to innovate within two different types of systems?
Justin: Yeah, absolutely. I think one of the fundamental challenges is that the classic mode-one group is tasked with minimal downtime and disruption. Yet the mode-two crew, who are primarily driving ahead with more of a DevOps approach, are trying to work with the operations folks and push together a new way of working: releasing applications quickly to achieve a faster deployment velocity. Quite often the two have fundamentally different goals.
If you come to the end of a quarter, you can have one group that is objectivized on minimal downtime and zero outages, so they won’t want that major app to be released. Yet you can have a hotshot dev team that’s screaming to have an app released because it’s super important for the business, and they’re objectivized on having it live. This is where you need to bring everyone together to have those common goals, be measured against those goals, and have far more open communication and transparency around what’s not working.
That means no finger pointing or blaming, which is one of the big barriers, because people can manipulate data and information to their own advantage. If you have an environment and culture which says, “We’re going to put everything on the table. We’re not going to blame anyone. We’re going to just get to the root cause of an issue,” that’s a far healthier place to work in.
Jason: Yeah. A lot of it has to do also with maintaining a status of knowing where the other team is in their lifecycle. Whether that’s a matter of coordinating schedules, or just even having a snapshot of their environment and what’s happening to that environment at that exact time, right?
As changes are being made, I should always have something I can refer to that tells me what the last good, known condition was, right? Where did this impact start to arise that I can share between a development and a test team, and then to IT ops, in case there’s something new introduced?
Another aspect of it, just from a technology perspective, is configuration changes, and that’s almost another half of the whole process, right? That’s what it takes to bring IT ops to the table: really understanding the impact of how systems are configured and deployed, and not just the features that were introduced. What impact does that have on the overall technology footprint that’s out there?
Justin: Completely agree. It’s that kind of governance which I think is going to be increasingly important as you look into next year. With DevOps approaches gaining ground, you’re going to have more and more teams being a bit more open and flexible in the ways they work. Yet, that doesn’t negate the need for traceability and auditability of what was released, who approved it at which stage, and who finally pushed the button and said, “Yes, we go live.”
Particularly in the area of banking or financial services, you’re always one day away potentially from an auditor coming back to say, “Where is the paperwork? Where is the documentation relating to this decision?” If you can immediately deliver that and show that full audit trail, that puts you in a lot stronger position. I completely agree, Jason.
Noel: Justin, you brought up service virtualization earlier, and that’s an area that we’ve spent a lot of time talking about here at Skytap. It’s interesting—there’s definitely still a lot of companies that are just now beginning to look into it, but I think that’s probably in response to the growing number of large enterprises who have introduced service virtualization into their testing efforts.
It’s allowing them to test a lot sooner, and against systems that are either very difficult to access or have been impossible to access at all. There have been some really cool things that testers in particular have been able to do with service virtualization technology.
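As an illustrative aside (not from the conversation itself), the core idea behind service virtualization can be sketched in a few lines: the application under test talks to a stand-in that replays recorded responses, so tests can run even when the real dependency is unavailable. All class, method, and data names below are hypothetical.

```python
# Illustrative sketch only: a hand-rolled stand-in for an unavailable
# backend, mimicking what service virtualization tools do with recorded
# request/response pairs. All names here are hypothetical.

class RealAccountService:
    """The real dependency, e.g. a mainframe service with limited test windows."""
    def lookup(self, account_id):
        raise ConnectionError("mainframe dev/test environment not available")

class VirtualAccountService:
    """Stands in for the real service, replaying recorded responses."""
    def __init__(self, recorded_responses):
        self.recorded = recorded_responses

    def lookup(self, account_id):
        # Replay the canned response captured from a real session.
        return self.recorded.get(account_id, {"status": "NOT_FOUND"})

def format_balance(service, account_id):
    # Application code under test: it neither knows nor cares whether
    # the service behind the interface is real or virtual.
    record = service.lookup(account_id)
    return f"{account_id}: {record['status']}"

# Tests run against the virtual service even when the real one is down.
virtual = VirtualAccountService({"A-100": {"status": "OK", "balance": 250}})
print(format_balance(virtual, "A-100"))  # A-100: OK
print(format_balance(virtual, "A-999"))  # A-999: NOT_FOUND
```

Because the application code depends only on the interface, the same tests can later be pointed at the real service without changes.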
Justin: I completely agree. I think with service virtualization, at the very beginning you had to persuade people that this thing actually works. To use an analogy, people not only wanted to see someone take a parachute, jump out of a plane, and survive. They wanted to see most of the folks in the plane jump out with parachutes and survive, then get back in another plane, go back up, and do the skydive once more. They want that extra, extra insurance that it’s going to work.
I think now we have such a huge bank of references in that area around the globe, with some major, major names, such as Nordstrom, that are relying on CA Technologies service virtualization to deliver this ability to have parallel development tracks, and so forth. It’s now reaching a much higher level of maturity.
I think the combination of service virtualization with Skytap’s ability to provide Environments-as-a-Service takes that value and makes it a kind of “1+1=3.” Jason, do you want to just expand in terms of the integration points that Skytap offers?
Learn More: Skytap for CA Application Delivery
Jason: Yeah, it’s particularly interesting, especially as I’ve been talking to different customers. Some of them have environments full of virtual services that mirror almost every aspect of the system. If you do it to such a high degree of fidelity that you’re basically creating another work center for yourself, then maybe you’re trying to put too much logic into the service virtualization itself.
The best use of it is obviously when you’re quarantining off things that you don’t have access to, or where you need a specific set of responses. Those are what you include in your environments. It helps you kind of separate yourself from everything that you’re developing or testing that’s out of scope for you.
Then in that environment you want to have everything that you directly have an impact on or control over. If the data is something you should have access to, then that should actually be present as an actual system. Same with an application and the code that goes through the application.
All of that works in Skytap. For instance, you would basically just have an environment that would represent as much of your production-like environment as you need—but you would have virtual services standing in for various items that aren’t available or that are third party items. Or things like data virtualization that fulfill the need for secure test data or things that you shouldn’t have access to.
Then you use that as your center of excellence for saying, “Okay, I’m going to have release automation driving releases into this system. Now, I’m going to coordinate that,” and that’s kind of the backend for DevOps as we see it.
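The composition Jason describes above could be sketched as an environment definition like the following. The syntax is invented purely for illustration; it is not Skytap’s or CA’s actual format, and every name in it is hypothetical.

```yaml
# Hypothetical environment definition -- invented syntax, for illustration only.
environment: order-service-dev-test
components:
  real:                              # things the team owns and changes directly
    - name: order-service
      source: git@example.com:acme/order-service.git
    - name: orders-db
      data: masked-test-dataset      # virtualized, secured test data
  virtual:                           # out-of-scope or unavailable dependencies
    - name: payments-gateway
      stand_in: recorded-responses-v3    # third-party system, stubbed
    - name: mainframe-inventory
      stand_in: recorded-responses-v1    # limited-access legacy system
release_automation:
  trigger: on-commit
  deploy_to: order-service-dev-test
```

The point of the sketch is the split: real components you control, virtual stand-ins for everything out of scope, and release automation targeting the whole environment as one unit.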
Now, it has a process. We could go all day on this because … I mean, you were there, Justin. We put out a book on it even.
Justin: Yes we did.
Jason: That book has many ways to apply different flavors of service virtualization, for sure.
Justin: Very good point. Thanks, Jason.
Noel: We’ve kind of been talking about the need for access to these systems under test or dev/test environments on demand. I read another recent piece of yours, Justin, where you talked about the relationship between DevOps and open source software, and it reminded me that all of these things are changing the way that software is being developed, and the cultures that surround them.
Letting people continue to work with the open source software and the tools that they’re used to working with, and giving people access to the environments they need when they need them—that’s going to lessen the blow of introducing changes to process or culture that are this big and transformative.
Justin: I completely agree. We actually had a paper that was written by the analyst firm Freeform Dynamics a few months ago on this very topic of how great open source tools are being wholeheartedly endorsed and approved by many in the DevOps community, and there are some really, really good technologies out there that people are using to good effect.
A couple of changes sometimes happen with that. You can get folks who become experts in scripting and creating certain recipes, so to speak, in terms of how they’re putting together all these actions. But the process becomes dependent on what I would call the subject matter expert, that someone who knows specifically how to do this. If they go on vacation, or they’re off sick, or they leave the company, that potentially leaves a hole: how was this set up by the person who had that knowledge?
The great thing with CA Release Automation is that you have this kind of unified management console that brings together all of these technologies into a manifest-driven deployment engine. It uses what we call a zero-touch approach that maximizes automation, uses intelligent workflows, and builds that master view across the whole release management or deployment process, irrespective of how many technologies you have.
Coupled with what Skytap can offer, it’s these “golden template architectures” that I think you referenced, Jason, that add even more value to that.
Jason: Yeah, I would definitely say so because, in order to have a DevOps process of this scale, you don’t want it to depend obviously on one subject matter expert. We would always say, “Well, what if a rock dropped on you right now?” The Phoenix Project covers that in detail. There’s the one expert who tends to run around and do everything. Then the rest of the company depends upon that person. They become a bottleneck, right?
It’s the same with technologies. I think DevOps by its very nature is conducive to using open source, or at least open standards so that you’re open to integrating with other pieces of software and trying to build something that will work going forward without being locked into a specific process.
That’s one of the neat things about the way that CA Release Automation works. It really doesn’t matter how complicated that deployment is. It’s the same way when you take that resulting environment and clone it very rapidly. All the people who are using it don’t need to be experts in the entire infrastructure and how those applications are put together. They need to be domain experts in their own space, and they can start contributing immediately without that kind of delay.
I think it’s really interesting how it’s come together lately.
Justin: I completely agree. What you’re then doing is depersonalizing the application build, deployment, and release processes. You’re making it something that’s repeatable, and not down to a particular individual’s ability. I think that’s absolutely spot on.
To learn more about how Skytap and CA Technologies increase the velocity of successful software delivery—from code check-in to production—check out our recent case study.