Robust DevOps practice necessitates modular and automatic construction of software and infrastructure. At Skytap, we leverage our own customer-facing services to construct a delivery pipeline that allows engineers in operations, test, and development to collaborate at every point of the software delivery lifecycle (SDLC). We ultimately deliver working clones of our production environment that can be launched on demand, without incurring the cost of provisioning and deployment.
This article provides a detailed description of this continuous integration and continuous delivery (CI/CD) pipeline, with emphasis on how our process enables us to deliver fully functional pre-production environments. For a discussion of the motivation behind this effort and our observation of how it has improved software delivery at Skytap, refer back to part one of this series.
For this discussion, we’ll begin with an abstract overview of CI and CD as we model them. We’ll then use these models to demonstrate how we generate the toolchain itself. Finally, we’ll show how these tools are combined to produce Skytap Templates as packaged artifacts; engineers can use these templates to deploy their own fully functional pre-production environments at any time.
Overview of CI/CD at Skytap
For this discussion, we consider the most basic unit of useful software to be the artifact. An artifact is the result of a build job that persists after the job ends. Examples of artifacts in our usage include packaged dependencies (libraries), Jenkins jobs, Docker images, and Skytap Templates.
There are three basic abstractions that we use to model the Skytap CI/CD pipeline: the artifact production model, the continuous integration (CI) model, and an aggregate of these called the continuous delivery (CD) model.
Note that our definition of CI/CD may differ from what you’re familiar with. Some organizations consider automatic deployment to be a step in the CD model, but we don’t. Instead, we treat deployment as a distinct activity from delivery. We use continuous deployment in a few specialized cases (you’ll see later that we automatically deploy Jenga artifacts to a Jenga service and back to our Jenkins server for later work), but we don’t currently employ continuous deployment as a general principle, and won’t explore it in depth in this article.
The Artifact Production Model
Building any software artifact can be modeled as an activity in which you combine build configuration, source code, and any required external artifacts to produce a new artifact:
We use this simple idea to model build and delivery jobs of varying complexity. In some cases, we might not require the source code (if we’re simply combining artifacts from previous builds, for example). Sometimes we don’t need external artifacts (for example, if we’re building a primitive that doesn’t require external dependencies). Additionally, configuration may be implicit—it’s always there, in some form, but sometimes an artifact-producing build includes configuration that you don’t see (configuration of the build server itself, for instance, is implicit for most builds).
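To make the model concrete, here’s a minimal sketch of it expressed as a Jenkins Job Builder definition. The job, repository, and package names are hypothetical; the point is just that configuration (the YAML itself), source code, and external artifacts combine to produce a new, persistent artifact:

```yaml
# Hypothetical Jenkins Job Builder definition illustrating the artifact
# production model: configuration (this YAML), source code (the git repo),
# and external artifacts (pre-built dependency packages) combine to
# produce a new, persistent artifact.
- job:
    name: example-service-artifact
    scm:
      - git:
          url: 'git@git.internal.example:example/service.git'
          branches:
            - master
    builders:
      - shell: |
          # Pull in external artifacts (packaged dependencies)
          apt-get install -y example-lib
          # Build the new artifact from source plus those dependencies
          make package
    publishers:
      - archive:
          artifacts: 'dist/*.deb'
```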
The Continuous Integration Model
The goal of CI is to automatically build and validate a new artifact for every change. Builds in the artifact production model are not necessarily automatic or validated, and do not necessarily occur after every check-in.
If we zoom in on a build process from the artifact production model, we can configure that build as a series of steps that automatically trigger their downstream neighbor. Some of these steps are responsible for building the software, and some of them are responsible for validating the source code or the produced artifact. This gives us a CI model that looks like this:
Every process using the CI model begins with a check-in (code, configuration, artifacts) and ends with either a failure in some stage or by producing a validated software artifact.
It’s valuable to consider the CI build model as a sequence of steps in a single build process, rather than as a series of chained builds—it doesn’t make sense to treat each step as a discrete build, because there isn’t a useful artifact left over after each step.
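A hypothetical Jenkins Job Builder sketch of the CI model might look like the following: a check-in triggers one build whose steps run in sequence, a failure at any step fails the whole build, and only the final, validated artifact persists. (Job names and commands are illustrative, not our actual configuration.)

```yaml
# Hypothetical CI job: every check-in triggers one build whose steps run
# in sequence; a failure at any step fails the whole build, and only the
# final, validated artifact persists.
- job:
    name: example-service-ci
    scm:
      - git:
          url: 'git@git.internal.example:example/service.git'
    triggers:
      - pollscm:
          cron: 'H/5 * * * *'
    builders:
      - shell: 'make lint'        # validate the source
      - shell: 'make test'        # validate the behavior
      - shell: 'make package'     # produce the artifact
      - shell: 'make smoke-test'  # validate the artifact
    publishers:
      - archive:
          artifacts: 'dist/*'
```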
The Continuous Delivery Model
When we combine the artifact production model and the CI model, and use the construction of upstream artifacts (instead of code check-ins) to trigger new builds, we get a slightly more complex aggregate. The result is our CD model:
The CD model will automatically trigger a build when there are changes to source artifacts, and the build process will automatically produce, package, and deliver a new artifact.
Astute observers may notice the resemblance to the artifact production model—we feed configuration and artifacts into a well-defined process, and deliver a packaged artifact to wherever it needs to be (package management systems, Docker image registries, etc).
You may also notice that the build process itself is very similar to the CI model, with a series of discrete steps that automatically produce and validate the desired result after each upstream change.
The CD model combines the artifact-production model and the CI model to create something that’s specifically concerned with combining modular components into self-contained packages. It’s distinguished from the CI and artifact-production models by these traits:
- Neither CI nor artifact production are concerned with delivery. Delivery-specific activities include: integrating external dependencies, packaging (for example, creating a Debian package or a Docker image), and shipping the resulting artifact somewhere that it can be used for deployment or downloaded directly (for example, a package manager).
- Some of its inputs come from upstream builds. Build triggers for CD are other builds, while CI builds are triggered by changes to source code or configuration.
The CD model is therefore a specialized case of the artifact production model that is specifically concerned with combining the results of other builds into a new, aggregate artifact and preparing that artifact for deployment.
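A hedged sketch of such a CD job, in Jenkins Job Builder terms, follows; the upstream job names, registry, and commands are hypothetical, but the shape is what matters: the trigger is another build, and the steps integrate, package, and ship.

```yaml
# Hypothetical CD job: triggered by upstream builds rather than check-ins.
# It pulls upstream artifacts together, packages them, and ships the result.
- job:
    name: example-service-cd
    triggers:
      - reverse:
          jobs: 'example-service-ci, example-lib-ci'
          result: 'success'
    builders:
      - copyartifact:
          project: example-service-ci
          which-build: last-successful
      - shell: |
          # Package the integrated components and deliver them
          docker build -t registry.internal.example/example-service .
          docker push registry.internal.example/example-service
```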
It’s important to distinguish CD builds from simpler builds because the distinction gives us two important points of flexibility. First, we can more effectively distribute builds and give individual teams the autonomy they need to manage their own builds. Second, it allows us to structure the delivery pipeline as a set of discrete stages. Discrete stages help avoid unnecessary build work (we only need to rebuild the components that have changed, rather than the entire platform), and they establish a clear, modular chain of events that starts with a check-in to any one of many services and ends with complete, pre-packaged environments running the entire Skytap platform.
Our Implementation
Let’s take a look at how we implement these models.
We’ll start with the build server and then move on to how we deliver our tools back into the build ecosystem (including Jenga, our environment provisioning tool). Finally, we’ll look at how this build process is combined with Jenga and the power of Skytap Templates to produce full working environments as artifacts.
Building The Build Server
Our CI/CD workflow begins with a Jenkins server. The Jenkins configuration, including plugins or other server-scope items, is managed with Puppet. We install and configure Jenkins on a server that is launched from a Skytap template. If you’re keen to apply the models outlined above, the artifact production model would be a good place to start: the inputs are the Skytap template, Jenkins and its plugins, and Puppet modules. The build process is running Puppet, and the artifact it produces is a functional build server.
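For a rough sense of what that configuration looks like, here’s a small sketch of the kind of Hiera data a Puppet profile for Jenkins might consume. The class and key names are illustrative, not the actual interface of our modules:

```yaml
# Hypothetical Hiera data consumed by a Puppet profile that installs and
# configures Jenkins; key names are illustrative, not the actual
# interface of our modules.
profile::jenkins::plugins:
  - git
  - copyartifact
profile::jenkins::num_executors: 4
profile::jenkins::home: '/var/lib/jenkins'
```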
With a functional Jenkins server, we’re ready to configure our build jobs. “Jobs,” in this usage, are any processes that we run on the Jenkins build server—this includes build jobs that produce service artifacts, garbage collection jobs to remove old templates, packaging and delivery jobs, and upkeep jobs that keep the Jenkins server configuration up to date. We treat our Jenkins jobs as artifacts, and produce them with the artifact production model, like so:
Note that we’re producing these jobs from (essentially) source code: by defining jobs with YAML, we can leverage the full power of source control to manage them. Instead of relying on finicky and time-consuming manual configuration, we can recreate all of the jobs on a new build server with minimal effort.
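Defining jobs as YAML also lets us stamp one definition out across many services using Jenkins Job Builder’s templates. A hypothetical example (service and repository names invented):

```yaml
# Hypothetical Jenkins Job Builder template: one job definition, stamped
# out for several services, all managed in source control.
- job-template:
    name: '{service}-build'
    scm:
      - git:
          url: 'git@git.internal.example:skytap/{service}.git'
    builders:
      - shell: 'make package'

- project:
    name: example-services
    service:
      - api
      - scheduler
      - billing
    jobs:
      - '{service}-build'
```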
When we provision the build server, we use a shell script to bootstrap an initial, top-level “updater” job. This updater job has SCM hooks that trigger Jenkins Job Builder to automatically produce new jobs each time a change is made to the job’s YAML configuration. The updater will reconfigure itself in this manner, and will also configure several “child” updater jobs. The hierarchy of updaters looks roughly like this:
Generally, we map a Jenkins project to a Skytap service, such that one project produces artifacts for one service. By maintaining a separation between the top-level “updater” jobs (which set up the bootstrap and updaters for each individual project) and the project-level updaters themselves, we gain two important pieces of flexibility:
- Teams can build a CI/CD process that makes the most sense for them, while maintaining clean separation between the build pipelines used by each team. Smaller, more modular builds make the build infrastructure less fragile.
- Organizing jobs at the project level gives us considerable flexibility in arranging build slaves and managing the build infrastructure, without needing to understand the specifics of every project.
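To make the updater pattern concrete, here’s a minimal sketch of a self-reconfiguring updater job; the repository path and job name are hypothetical:

```yaml
# Hypothetical "updater" job: a check-in to the job-definition repository
# triggers Jenkins Job Builder to regenerate every job the repository
# describes, including this updater itself.
- job:
    name: project-jobs-updater
    scm:
      - git:
          url: 'git@git.internal.example:skytap/project-jenkins-jobs.git'
    triggers:
      - pollscm:
          cron: 'H/2 * * * *'
    builders:
      - shell: |
          # Regenerate all jobs defined in this repository
          jenkins-jobs update jobs/
```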
We have additional organizational constructs for maintaining build pipelines for current, previous, and upcoming releases; these all follow a similar process.
With this system, we’re currently managing around 400 different jobs. We expect that the modularity of the process will allow us to trivially scale the build process as the organization continues to grow.
Building Jenga
After the build server, the next component in the environment delivery pipeline is our custom-built provisioning and deployment tool, Jenga.
At this point I’m sure you won’t be surprised to learn that we use CI and CD to produce Jenga artifacts. The artifact production model for Jenga is straightforward:
The CI model is pretty typical for a Python project:
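We won’t reproduce Jenga’s actual build configuration here, but a typical Python CI job of this shape might look something like the following sketch (names and commands are hypothetical):

```yaml
# Hypothetical CI job for a Python project like Jenga: each check-in is
# linted, tested, and built into an installable distribution, in sequence.
- job:
    name: jenga-ci
    scm:
      - git:
          url: 'git@git.internal.example:skytap/jenga.git'
    triggers:
      - pollscm:
          cron: 'H/5 * * * *'
    builders:
      - shell: |
          pip install -r requirements.txt
          flake8 jenga/          # validate the source
          pytest tests/          # validate the behavior
          python setup.py sdist  # produce the artifact
    publishers:
      - archive:
          artifacts: 'dist/*'
```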
The CD model is more interesting. At this point, we package Jenga as a Docker image and deliver it to three endpoints: a host running Jenga as a service, the Jenkins server, and our private Docker registry. Each of these endpoints uses Jenga to serve a different need; we’ll discuss these three options in the “How Continuously-Delivered Environments Work For Us” section.
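As a rough sketch (registry name and commands hypothetical), the delivery job has the now-familiar shape: an upstream trigger, a packaging step, and delivery steps:

```yaml
# Hypothetical CD job for Jenga: a successful upstream CI build triggers
# packaging as a Docker image and delivery to the private registry; the
# Jenga service host and the Jenkins server receive the image from there.
- job:
    name: jenga-cd
    triggers:
      - reverse:
          jobs: 'jenga-ci'
          result: 'success'
    builders:
      - shell: |
          docker build -t registry.internal.example/jenga:latest .
          docker push registry.internal.example/jenga:latest
```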
Building Environments
With Jenga automatically delivered back to Jenkins, we simply need to apply the same three models outlined above one more time to build Skytap Environments.
The artifact production model for Skytap environments looks like this:
And the CI model:
At this point, we’ve produced a full environment as an artifact, which we can save as a golden Skytap template. That brings us to the end of the journey, right?
But wait, there’s more! How about we feed those templates back into a job (as an upstream artifact), deploy an environment from the template, and perform some validation on that new environment? While we’re here, let’s clean up any older templates to save some resources. Adding that to a CD model is almost trivial (although the fiddly bits of full-system testing are another story, of course).
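A hedged sketch of that validation-and-cleanup job follows; the helper scripts stand in for whatever talks to the Skytap API, and all names here are hypothetical:

```yaml
# Hypothetical validation-and-cleanup job: a newly delivered template
# triggers a fresh deployment, system tests against it, and garbage
# collection of older templates. The helper scripts are placeholders for
# whatever talks to the Skytap API.
- job:
    name: environment-template-validate
    triggers:
      - reverse:
          jobs: 'environment-template-cd'
          result: 'success'
    builders:
      - shell: |
          ./scripts/deploy_environment_from_template.sh
          ./scripts/run_system_tests.sh
          ./scripts/gc_old_templates.sh --keep 5
```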
How Continuously-Delivered Environments Work For Us
Applying CI and CD at each phase of the build and delivery pipeline allows us to apply similar abstractions to each phase of the process. This makes it easier to reason about the system, modularize it, and to build more complex artifacts atop simpler, automatically-constructed components.
CI and CD obviously aren’t unique to Skytap, and neither are custom automated environment provisioning tools like Jenga. They’re great tools of course, but the really spectacular thing here is being able to take working Skytap Environments in their entirety and treat them like any other software artifact. Skytap Templates provide us this power.
We really can’t overstate the benefit of deploying prepared environments from templates instead of building environments from scratch. Automatic provisioning is time-consuming, and when things go wrong, you need a fair bit of knowledge about how Puppet and the Skytap infrastructure work to resolve problems. With ready-to-go templates, we’re often able to save engineers entire workdays that might otherwise be spent waiting on Jenga.
We use our CD process to deliver several “off the shelf” templates, each of which can instantiate environments for several common needs:
- Environments with the minimum set of infrastructure needed to test services
- Small environments that reproduce all of the components in production, such as software-defined networking and VM hosting infrastructure
- Larger “almost production” environments with all of the infrastructure and redundancies required to emulate production behavior
This is a huge timesaver for individual engineers. If you want one of the standard “off the shelf” variants of the environment, you don’t even need to build it yourself; you can just start an environment from the template (at the specific version you want). That’s it!
“Off the shelf” templates won’t serve every purpose, which is why we continue to vend Jenga as a Docker image, and continue to maintain a Jenga service. Here’s a breakdown of the options available when engineers need a pre-production environment:
For most needs, engineers are interested in testing their service on a recent pre-production environment; they don’t need to customize the infrastructure or change much related to which services are deployed. In this case, the first option (copying a pre-built artifact) is the most appropriate.
In an intermediate case, an engineer may need to customize the environment; for instance, adding or removing hosts, networks, or services. Here, it’s usually appropriate to run the actual provisioning process to build a custom environment. The Jenga service provides a command line interface that simplifies copying an existing configuration and starting the Jenga process. The engineer can modify this configuration as needed before provisioning starts. The service handles the details of running Jenga and notifies the engineer when it’s complete.
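Because Jenga is an internal tool, we won’t show its real configuration schema; but a copied-and-modified environment configuration might look something like this illustrative sketch:

```yaml
# Illustrative sketch only: this is not Jenga's real configuration schema.
# The idea is that an engineer copies an existing configuration and edits
# hosts, networks, or services before provisioning starts.
environment:
  name: feature-x-preprod
  base_template: preprod-minimal
  hosts:
    - name: web-1
      services: [api, frontend]
    - name: db-1
      services: [postgres]
  networks:
    - name: internal
      subnet: 10.0.0.0/24
```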
Finally, some custom needs require that you run Jenga directly. This is common if we’re developing new features in Jenga or creating a new type of environment template.
For a broader discussion of how these tools and our process have improved our ability to release software, refer back to part one.
Final Thoughts
DevOps is about ownership of a product throughout the SDLC; our tools and the consistent application of similar practices across engineering teams make this broad ownership possible.
Automation is essential to this practice, and having the ability to provision and configure an environment on-demand is a great milestone in establishing DevOps practices in your organization. However, even in the best cases, the complexity and time commitment of ad-hoc provisioning can be burdensome. Building a new environment from scratch on each change, or each time you need an instance of the environment, involves a lot of wasted effort. It’s like installing software—sometimes you need to build it from scratch, but most of the time, you just need to install a package that does something useful.
By delivering templates as standalone artifacts, we’ve been able to mitigate much of the pain inherent in provisioning. Engineers don’t have to wait for environments or manage environment builds; this empowers them to focus on whichever aspect of the SDLC is most meaningful to them, without building silos around the components that other engineering specialties will find more meaningful.
We’re proud of the platform that we’ve built, and we’re proud of the DevOps culture we’re empowering with it. But most of all, we’re excited to see the awesome things our customers build with Skytap. Go forth, and embrace the power of DevOps today!