Continuous Delivery across multiple providers
Over the last year three of the four customers I worked with had a similar challenge with their environments. In different variations they all had their environments set up across separate domains, ranging from physically separated on-premise networks to environments running across different hosting providers managed by different parties.
Regardless of the reasoning behind this kind of setup, it’s a situation where the continuous delivery concepts really add value. The stereotypical problems that come with manual deployment and testing practices tend to get amplified when they occur in separated domains. Things get even worse when you add more parties to the mix (like external application developers). Sticking to doing things manually is a recipe for disaster, unless you enjoy going through extensive procedures every time you want to do anything in any of ‘your’ environments. And if you’ve outsourced your environments to an external party, you probably don’t want to have to (re)hire a lot of people just so you can communicate with your supplier.
So how can continuous delivery help in this situation? By automating your provisioning and deployments you make deploying your applications, if nothing else, repeatable and predictable, regardless of where they need to run.
Just automating your deployments isn’t enough, however; a big question remains: who does what? That question is most likely backed by a lengthy contract. Agreements between all the parties are meant to provide an answer to it: a development partner develops, an outsourcing partner handles the hardware, etc. But nobody handles bringing everything together…
The process of automating your steps already provides some help with this problem. In order to automate, you need some form of agreement on how to provide input for the tooling. This at least clarifies what the various parties need to produce, and what the result of each step will be, which removes some of the fuzziness from the process. Questions like whether the JVM is part of the OS or part of the middleware should become clear. But not everything is that clear-cut: it’s at the parts of the puzzle where pieces actually come together that things turn gray. A single tool may need input from various parties. Here you need to resist the common knee-jerk reaction to shield said tool from other people with procedures and red tape. Instead, provide access to those tools to all relevant parties and handle your separation of concerns through a reliable access mechanism. Even then there might be some parts that can’t be used by just a single party, and in that case, *gasp*, people will need to work together.
What this results in is an automated pipeline that will keep your environments configured properly and allow applications to be deployed onto them when needed, within minutes, wherever they may run.
The diagram above shows how we set this up for one of our clients, using XL Deploy, XL Release and Puppet as the automation tooling of choice.
In the first domain we have a git repository to which developers commit their code. A Jenkins build extracts this code, builds it, and packages it in a format that the deployment automation tool (XL Deploy) understands. It’s also kind enough to make that package directly available in XL Deploy. From there, XL Deploy is used to deploy the application not only to the target machines but also to another instance of XL Deploy running in the next domain, enabling that same package to be deployed there. The same mechanism can then be applied to the next domain. In this setup we ensure that the machines we deploy to are consistent by using Puppet to manage them.
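The chained, domain-to-domain promotion described above can be sketched as a toy model. This is plain Python, not XL Deploy’s actual API; the domain and target names are illustrative. Each domain deploys the package to its own targets and then forwards the identical package to the instance in the next domain:

```python
# Toy model of chained, per-domain deployments (illustrative only;
# this is not XL Deploy's real API).
from dataclasses import dataclass, field


@dataclass
class Domain:
    """One network domain with its own deployment-tool instance."""
    name: str
    targets: list
    next_domain: "Domain | None" = None
    deployed: list = field(default_factory=list)

    def deploy(self, package: str) -> None:
        # Deploy the package to every target machine in this domain...
        for target in self.targets:
            self.deployed.append((package, target))
        # ...then forward the identical package to the next domain,
        # so the same mechanism can deploy it there.
        if self.next_domain is not None:
            self.next_domain.deploy(package)


# Wire up three domains: dev -> test -> production.
prod = Domain("production", ["prod-1", "prod-2"])
test = Domain("test", ["test-1"], next_domain=prod)
dev = Domain("dev", ["dev-1"], next_domain=test)

dev.deploy("webshop-1.0.dar")
print(prod.deployed)  # the same package reached the last domain untouched
```

The point of the chain is that no domain needs to reach back into another party’s network: each one only needs a single, agreed hand-off point to its neighbour.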
To round things off, we use a single instance of XL Release to orchestrate the entire pipeline. A single release process is able to trigger the build in Jenkins and then deploy the application to all environments spread across the various domains.
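The single orchestrating release process can likewise be sketched as a toy sequence. Again this is plain Python standing in for XL Release and the tools it drives, not their real APIs; function and environment names are assumptions. One process triggers the build once and then walks the environments in promotion order:

```python
# Toy orchestration of the pipeline: trigger the build once, then
# deploy the resulting package to each environment in order.
# (Illustrative only; not XL Release's real API.)

def build(commit: str) -> str:
    """Stand-in for the Jenkins build: turn a commit into one package."""
    return f"app-{commit}.dar"


def deploy(package: str, environment: str) -> str:
    """Stand-in for an XL Deploy deployment to one environment."""
    return f"deployed {package} to {environment}"


def release(commit: str, environments: list) -> list:
    """One release process driving the whole pipeline."""
    package = build(commit)
    # The environments may live in different domains; the list order
    # encodes the promotion path (dev before test before production).
    return [deploy(package, env) for env in environments]


for line in release("a1b2c3", ["dev", "test", "production"]):
    print(line)
```

The design choice this models is that the build happens exactly once: every environment, in every domain, receives the same artifact rather than a rebuilt one.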
A setup like this reduces the deployment errors that come with manual deployments and cuts out all the hassle of following the required procedures by hand. As an added bonus, your deployment pipeline also speeds up significantly. And we haven’t even talked about adding automated testing to the mix…