“My application can’t be moved to the cloud!”

My company hosts IBM Power AIX and IBMi application workloads in the cloud, in partnership with two of the world’s largest technology companies. In my daily work as a Cloud Solutions Architect (aka Pre-Sales Engineer), I listen to many customers describe their “hopes and dreams” for moving legacy workloads to the cloud. These workloads are definitely “Cloud Stubborn.” When it comes to legacy applications based on IBM Power, one of the most common responses I hear is:

“It is impossible to move my IBM Power-based application to the cloud.”

Of course, that raises the question: “Why not?” The answer is usually one of these:

  1. “My application is based on IBM’s AS/400 (more recently called IBMi) or on IBM AIX.”
  2. “My application has hard-coded IP addresses compiled into the source code.”
  3. “There is no longer anyone around who knows about the code or applications that are still running.”

There are other reasons that could be mentioned, but these are the most common. Let’s debunk each one, or at least outline a strategy for approaching it.

“My application is based on IBM’s AS/400 or IBM AIX”

It is true that just a few years ago, this statement would have been the ultimate “blocker” for moving a legacy application based on IBMi (AS/400) or AIX to the cloud, but today that is no longer the case.

Microsoft Azure, IBM, Google, and a host of smaller players all offer some type of infrastructure service that makes moving your IBMi or AIX applications to the cloud technically possible. Each provider offers a slightly different range of capabilities, but in the end, the idea is the same: move your legacy IBMi or AIX applications to the cloud without “substantially” changing their original architecture. What counts as “substantially” varies from vendor to vendor, but the approach is “lift and shift” rather than re-architecting.

The service I’m most familiar with is Microsoft Azure and its ability to host IBM Power workloads running inside an Azure datacenter. Our default presumption is that we can successfully move an “LPAR” (a logical partition, effectively an IBM Power virtual machine) to the cloud without changing the application architecture. Certain operational techniques do have to change in the cloud; for instance, there is no such thing as a physical tape drive there. But as far as the applications go, the default thinking model is “lift and shift,” not rewrite.

“My application has hard-coded IP addresses compiled into the source code.”

This one is the most technologically interesting reason, and it comes up more often than you might think. Today, applications that reference other servers or services use some type of abstraction to make that reference to an external entity: DNS or another naming service, or a variable in the code whose value is read from a data source that can be updated without changing the application code itself.

But some legacy applications come from a code base that is 30+ years old. They did not use naming services or build variable-based abstraction into their core logic. If one server had to talk to another server, the IP address of that external server was baked right into the source code and then compiled into the executable object that became the running application. The contrast with the modern approach looks something like the sketch below. That solution was fine back when there was no cloud, everything ran in the on-prem data center, and no one thought that would ever change.
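To make the contrast concrete, here is a minimal sketch in Python. The real legacy applications would have been written in languages like C, COBOL, or RPG on AIX or IBMi, and the IP address, port, hostname, and environment variable shown here are purely hypothetical:

```python
import os
import socket

# The legacy pattern: the address of the partner server is baked into the
# program and compiled in. Moving either machine means changing source code
# and recompiling.
LEGACY_ORDER_SERVER = "10.20.30.40"   # hypothetical hard-coded IP

def connect_legacy():
    return socket.create_connection((LEGACY_ORDER_SERVER, 9100))

# The abstracted pattern: the program asks a name service (or a config
# source such as an environment variable) at run time, so the target can
# move without touching the code.
def connect_abstracted():
    host = os.environ.get("ORDER_SERVER_HOST", "orders.example.internal")
    addr = socket.gethostbyname(host)   # DNS resolves to wherever it lives today
    return socket.create_connection((addr, 9100))
```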

I have one customer with that exact problem. Their corporate data center is shutting down. Any new greenfield applications are being created using Azure native services, but they have a substantial application running on AIX that contains hard-coded IP addresses. The application was based on a software package from a vendor that no longer exists.

They could not do a “big bang” migration and magically move all the application components in a single maintenance window; in fact, the migration would need to take weeks. In the end, we used a VXLAN-based solution to extend some of the existing subnets so that they existed and were active in both Azure and on-prem at the same time. That allowed us to incrementally move LPARs with hard-coded IP addresses from on-prem to the cloud without changing anything. VXLAN-style subnet extension is not recommended for long-term production use, but as a temporary stop-gap to facilitate a migration to the cloud, it was perfect.

“There is no longer anyone around who knows about the code or applications that are still running.”

This one is also technologically tough, but instead of the economically undesirable approach of rebuilding from scratch, apply a “strangler pattern.”

We’ve already made the case that it is possible to lift and shift complicated IBM Power-based applications to the cloud. Once there, they run in the same physical data centers as the new applications and services you are creating, which means the legacy applications have low latency when talking to the other pieces of the larger application landscape. With everything under one roof, some of the time pressure around dealing with legacy components comes off. You can begin to use strangler pattern concepts to replace legacy application services piece by piece, and you can switch commodity services to whatever the cloud offers. For instance, if your legacy IBMi application written in COBOL or RPG used a file server to store documents, intercept that file server path and point it at an NFS location in Azure Files or Azure Blob storage, or the equivalent in Google Cloud (a sketch of this idea follows below). Just chip away at it in a low-risk, low-cost model rather than attempting a total rewrite that is high risk and high expense.
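As a simple illustration of that idea, here is a minimal strangler-style sketch in Python. The environment variable, paths, and function names are hypothetical; in reality the legacy code would be COBOL or RPG, and the NFS share would be mounted at the operating system level:

```python
import os
from pathlib import Path

# Hypothetical strangler-style indirection: callers ask for a document by
# name, and a single configuration value decides whether that resolves to
# the old on-prem file server mount or to an NFS share backed by a cloud
# service (for example, an Azure Files share mounted at /mnt/azurefiles).
DOC_ROOT = Path(os.environ.get("DOC_ROOT", "/mnt/legacy_fileserver/docs"))

def read_document(name: str) -> bytes:
    """Return the contents of a stored document, wherever it lives today."""
    return (DOC_ROOT / name).read_bytes()

def write_document(name: str, data: bytes) -> None:
    """Store a document under the configured document root."""
    path = DOC_ROOT / name
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_bytes(data)

# Flipping DOC_ROOT to the cloud-backed mount moves the storage without
# touching any of the code that calls read_document / write_document.
```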

Summary

In my experience, it is possible to move IBM Power applications based on IBMi or AIX to the cloud, even when it initially appears “impossible.” Sometimes a little imagination and technical creativity are all that is needed. If you have to exit your data center, try everything you can think of to move those legacy applications without disturbing how they are architected or how they operate. Lift and shift them into the cloud, into a low-latency environment alongside the modern application components you are building, and then chip away at the legacy pieces. Let them “run forever” in an “as is” mode, or apply strangler techniques and slowly migrate them to modern technology over time.
