How I Learned to Love Environment Proliferation

Peter Sellers as Dr. Strangelove: “…it is not only possible, it is essential. That is the whole idea of this machine, you know.”

Software development is a strange craft. In some ways things seem to stay the same forever; we still sit at the same Unix command line from over 20 years ago and we usually use the same editors, but the software being developed feels pretty different. For one thing, we spend much more time selecting the software we will reuse than we do writing new code. For another, we don’t need actual physical servers anymore; we can just conjure up servers whenever we need them. I love that part.

The constant theme over the last decade or more has been creating and delivering smaller and smaller chunks of new code, faster and faster. I used to develop code that wouldn’t be used by actual customers until more than a year after I originally wrote it. Today, it’s not unusual to get live feedback from customers for changes made just yesterday.

By necessity, the steps required to safely move code from my laptop to a production server have become small, fast, and highly consistent via automation. Agile made the case for more frequent releases and provided the techniques to iterate effectively. DevOps supplied great new automation technology for setting up my applications, and the cloud gives me whatever machines I need—quickly and cheaply.

Today’s software applications often have to be usable by very large numbers of people, and they don’t simply run on a laptop or a single server; they require a complex conglomeration of machines, networking, and storage, all load-balanced for scale and secured with firewalls. Making sure your software works in such a complex setup, known as an environment, is something quite different from the programming experience of years past. Help is required.

The need for on-demand virtual environments
As software has become more complex, with databases, caching, message queues, and more, it has become imperative to accurately model the production environment in order to run valid tests before deploying to production. Serious organizations have done this via expensive physical replicas, or at least software-identical test environments run in labs. The problem is that as new code changes are delivered faster, organizations have to schedule, juggle, and constantly reset their test labs.

Over time, the test facilities quickly become less like production (and therefore less valid) and more like a roadblock to real development. The need for multiple valid test environments quickly becomes clear, as does the need to keep those environments updated and maintained.

These test environment needs go beyond newer-age agile shops. Even when an organization has resisted shorter release cycles, more often than not it has added technology and/or legacy applications via acquisition or merger. Getting physical test labs in place to model every production system is an increasing challenge for every type of organization. And when an organization decides to modernize and move towards the widely accepted continuous delivery (CD) model of development, the need for environments grows exponentially.

Essentially, with CD, a “delivery pipeline” is created that charts each step software must go through in order to be considered ready for “promotion.” To progress from one step to the next, a clean test environment should be used to run whatever tests are required. It’s not unusual for there to be four or five different quality gates per delivery pipeline, and it’s also not unusual to have many pipelines in “play.” The number of independent tests (with clean environments) quickly gets beyond what physical test labs can support.
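
To make the pipeline idea concrete, here is a minimal sketch in Python. The gate names, the provisioning and teardown helpers, and the stubbed test runner are all hypothetical; real pipelines are usually defined in a CI tool’s configuration, but the control flow is the same: each gate gets a clean environment, runs its tests, and discards the environment afterward.

QUALITY_GATES = ["unit", "integration", "performance", "security", "staging"]

def provision_environment(gate):
    # Hypothetical helper: stand up a fresh, production-like environment.
    print(f"provisioning a clean environment for {gate} tests")
    return f"env-{gate}"

def teardown_environment(env_id):
    # Discarding the environment after every run prevents drift.
    print(f"discarding {env_id}")

def run_tests(gate, env_id):
    # Stub: a real pipeline would run the gate's actual test suite here.
    print(f"running {gate} tests in {env_id}")
    return True

def run_pipeline(change_id):
    for gate in QUALITY_GATES:
        env_id = provision_environment(gate)
        try:
            if not run_tests(gate, env_id):
                print(f"{change_id} failed at the {gate} gate")
                return False
        finally:
            teardown_environment(env_id)
    print(f"{change_id} passed every gate and is ready for promotion")
    return True

run_pipeline("change-1234")

The try/finally matters: the environment is discarded whether the tests pass or fail, so no gate ever reuses a stale environment.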

The answer is on-demand virtual test environments. “Virtual” means that the whole test environment is captured in digital form with no dedicated hardware. This includes servers, networking, firewalls, load balancers, caches, VPN connections, and so on, and means that generic data center assets can be used to create the complete environment. “On-demand” means that you can quickly instantiate a fresh version of your environment. This typically implies a cloud (public or private) that can be used to quickly allocate resources and then discard them when testing is completed. Interestingly, discarding test environments turns out to be a highly valuable action: it protects against environment drift (via undocumented alterations), one of the forces that makes test environments diverge from production and produce invalid tests. It is much better to start each test with a fresh and validated environment.
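
As a rough illustration of what “captured in digital form” means, the sketch below describes an environment declaratively, as plain data. The field names and the template are invented for this example; real tools each have their own schema, but the principle holds: because the definition is just a versionable document, an identical fresh copy can be instantiated for every test run and thrown away afterward.

from dataclasses import dataclass, field

@dataclass
class ServerSpec:
    name: str
    image: str       # machine image, e.g. "ubuntu-22.04"
    cpus: int
    memory_gb: int

@dataclass
class EnvironmentSpec:
    name: str
    servers: list[ServerSpec] = field(default_factory=list)
    load_balanced: bool = False
    firewall_rules: list[str] = field(default_factory=list)

# The same template yields an identical environment every time it is used,
# so every test starts from a known-good, production-like state.
PROD_REPLICA = EnvironmentSpec(
    name="prod-replica",
    servers=[
        ServerSpec("web", "ubuntu-22.04", cpus=2, memory_gb=4),
        ServerSpec("db", "postgres-15", cpus=4, memory_gb=16),
        ServerSpec("cache", "redis-7", cpus=2, memory_gb=8),
    ],
    load_balanced=True,
    firewall_rules=["allow 443 to web", "allow 5432 from web to db"],
)

print(f"{PROD_REPLICA.name} defines {len(PROD_REPLICA.servers)} servers")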

Proliferate or die
The only way to break up large-scale software development into the small chunks needed to make modern systems agile and stable is to enable lots of parallel but fully valid tests. Using flexible, automated infrastructure is the only way to support this economically. Having dozens (or even hundreds) of physical test labs and keeping them maintained is simply not feasible, even for organizations that could afford to do so.

The best answer is to turn our infrastructure into code, use cloud services where possible, and stop making testing a bottleneck in our development processes. This approach certainly introduces some new problems, like “how do we manage and maintain all these environments?”, but it solves core problems and allows our products and services to evolve as they must in today’s changing markets.

Too often, testing gets relegated to being the hard problem that much of the organization wants to avoid, or pushed onto an under-resourced quality assurance organization. But testing is too important an activity for that to work. Testing today has to start shortly after development begins, and it changes quickly, adding more fidelity as each change nears production. It is absolutely worth the investment to make sure that tests have a valid place to “live” that makes them easy to run, easy to build, and easy to maintain. It takes almost every part of the organization working together to get a good suite of tests in place, and every organization that has such a resource swears by it.

Twenty years ago, I wrote a bunch of code, compiled, and built it into an executable (the “environment” was usually just a desktop computer), and then either began painstaking testing of the result myself or I gave my program to someone else who started the test process from scratch. If multiple people’s code went into the test, figuring out what went wrong (and who was to blame) took a tremendous amount of effort.

Modern developers make small iterative changes many times a day, and each one is checked and tested via continuous integration. These small changes are combined with dozens of open source packages, servers are provisioned programmatically to accept these changes, and the updated servers are placed into an environment that allows them to serve and scale traffic as required. Ideally, tests get run (automated ones are nice…) that focus on only a single source of change. That makes integration pretty easy.

The rub is that we may have tens or even hundreds of small changes to check—that’s where lots of environments pay for themselves. It makes programming easy and accessible again, and I love that part, too.
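
To ground that claim, here is one last hypothetical sketch: each change gets its own disposable environment, and because the environments are virtual and on-demand, checking a hundred changes in parallel is just a loop. The helper below stands in for a real CI system’s provision/test/discard cycle.

from concurrent.futures import ThreadPoolExecutor

def test_change_in_isolation(change_id):
    # Hypothetical: one disposable environment per change, so any failure
    # points at exactly one change rather than a batch of them.
    env_id = f"env-{change_id}"
    print(f"provisioning {env_id}")
    passed = True  # stand-in for running the real test suite
    print(f"discarding {env_id}")
    return change_id, passed

changes = [f"change-{n}" for n in range(1, 101)]

with ThreadPoolExecutor(max_workers=16) as pool:
    for change_id, passed in pool.map(test_change_in_isolation, changes):
        if not passed:
            print(f"{change_id} failed; only its author needs to investigate")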

Learn more about the benefits of creating complete test environments in the cloud by checking out Skytap’s solutions, and follow us on Twitter or LinkedIn to be the first to know when we publish the next chapter of our new technical series, Scaling Modern Software Delivery!
