How to deploy Highly Available persistent Docker services using CoreOS and Consul

Providing high availability to stateless applications is pretty trivial, as shown in the previous blog posts A Highly Available Docker Container Platform and Rolling upgrade of Docker applications using CoreOS and Consul. But how does this work when you have a persistent service like Redis?

In this blog post we will show you how a persistent service like Redis can be moved between machines in the cluster while preserving its state. The key is to deploy a fleet mount configuration into the cluster and mount that storage into the Docker container that holds the persistent data.
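To give an idea of what such a mount configuration looks like: fleet units are plain systemd units, so the persistent volume can be described with a mount unit roughly like the sketch below. The device, mount point and unit names are illustrative, not taken from the post.

```
# mnt-redis.mount - fleet-scheduled systemd mount unit (illustrative names and paths)
[Unit]
Description=Persistent data volume for Redis

[Mount]
# Assumes the persistent volume shows up as /dev/xvdf on the host that runs Redis.
What=/dev/xvdf
Where=/mnt/redis
Type=ext4

[X-Fleet]
# Keep the mount on the same machine as the Redis service unit.
MachineOf=redis.service
```

The Redis unit can then require this mount and start its container with -v /mnt/redis:/data, so the data stays on the volume whenever the container is rescheduled.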

Read more →

A Highly Available Docker Container Platform using CoreOS and Consul

Docker containers are hot, but containers by themselves are not very interesting: they need an ecosystem to make it into 24x7 production deployments. Just handing your container names over to operations does not cut it.

In this blog post, we will show you how CoreOS can be used to provide a Highly Available Docker Container Platform as a Service, with a bog-standard way to deploy Docker containers. Consul is added to the mix to create a lightweight HTTP router to any Docker application that offers an HTTP service.
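To give a flavour of the Consul side of this: such a router can route purely on what is registered in Consul's service catalog, so each containerised application ends up with a small service definition along these lines (the service name, port and health check below are made up for the example):

```json
{
  "service": {
    "name": "hello-world",
    "tags": ["http"],
    "port": 8080,
    "check": {
      "http": "http://localhost:8080/",
      "interval": "10s"
    }
  }
}
```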

We will be killing a few processes and machines along the way to prove our point...
Read more →

How to write an Amazon RDS service broker for Cloud Foundry

Cloud Foundry is a wonderful on-premises PaaS that makes it very easy to build and deploy applications while providing scalability and high availability to your stateless applications. But Cloud Foundry is really an Application Platform Service and does not provide high availability and scalability for your data. Fortunately, there is Amazon RDS, which excels in providing exactly that as a service.

In this blog I will show you how easy it is to build, install and use a Cloud Foundry Service Broker for Amazon RDS. The broker was developed in Node.js using the Restify framework and can be deployed as a normal Cloud Foundry application. Finally, I will point you to a skeleton service broker which you can use as the basis for your own.
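As a preview of the shape such a broker takes, here is a bare-bones Restify sketch of two endpoints from the Cloud Foundry v2 service broker API. This is not the broker from the post: the service name, plan and provisioning logic are placeholders.

```javascript
// Bare-bones sketch of a Cloud Foundry v2 service broker in Node.js with Restify.
// Identifiers, plans and provisioning logic are illustrative placeholders.
var restify = require('restify');

var server = restify.createServer();

// The catalog tells Cloud Foundry which services and plans this broker offers.
server.get('/v2/catalog', function (req, res, next) {
  res.send({
    services: [{
      id: 'rds-mysql',
      name: 'rds-mysql',
      description: 'MySQL databases on Amazon RDS',
      bindable: true,
      plans: [{ id: 'small', name: 'small', description: 'a small RDS instance' }]
    }]
  });
  return next();
});

// Provisioning a service instance is where the actual call to the AWS RDS API would go.
server.put('/v2/service_instances/:instance_id', function (req, res, next) {
  res.send(201, {});
  return next();
});

server.listen(process.env.PORT || 3000);
```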

Read more →

How to deploy a Docker application into production on Amazon AWS

Docker reached production status a few months ago. But having the container technology alone is not enough: you need a complete platform infrastructure before you can deploy your Docker application in production. Amazon AWS offers exactly that: a production-quality platform that provides capacity provisioning, load balancing, scaling, and application health monitoring for Docker applications.

In this blog, you will learn how to deploy a Docker application to production in five easy steps.
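One AWS service that matches this description is Elastic Beanstalk. Assuming that route, the Docker-specific part of such a deployment is little more than a Dockerrun.aws.json file; the image name and port below are illustrative:

```json
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "example/helloworld",
    "Update": "true"
  },
  "Ports": [
    { "ContainerPort": "8080" }
  ]
}
```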

Read more →

Bootstrapping and monitoring multiple processes in Docker using monit

If you have ever tried to start a Docker container and keep it running, you must have encountered the problem that this is no easy task. Most things I like to start in a container, such as HTTP servers, application servers and various other middleware components, tend to have start scripts that daemonize the program. Starting a single process is a pain; starting multiple processes becomes nasty. My advice is to use monit to start all but the most simple Docker application containers!

When I found monit while delving through the inner workings of Cloud Foundry, I was ecstatic about it! It was so elegant, small and fast, with a beautiful DSL, that I thought it was the hottest thing since sliced bread. I was determined to blog about it from the rooftops. Until... I discovered that the first release dated from somewhere in 2002. So it was not hot and new; clearly I had been living under a UNIX rock for quite a while. Still, this time the time was right to write about it!

Read more →
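To make the monit approach above concrete, a minimal control file for a container that supervises two daemonizing processes could look something like this; the paths, pidfiles and commands are invented for the example:

```
# Illustrative monitrc fragment: monit runs as the container's main process
# and supervises two daemonizing programs. All paths and commands are made up.
set daemon 5                       # poll every 5 seconds

check process nginx with pidfile /var/run/nginx.pid
  start program = "/usr/sbin/nginx"
  stop  program = "/usr/sbin/nginx -s quit"

check process myapp with pidfile /var/run/myapp.pid
  start program = "/usr/local/bin/myapp --daemonize"
  stop  program = "/usr/bin/pkill -F /var/run/myapp.pid"
```

The container's command then runs monit in the foreground (monit -I -c /etc/monitrc), so Docker sees a single long-lived process while monit bootstraps and watches the rest.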

Dockerfiles as automated installation scripts

Dockerfiles are great: easily readable specifications for the installation and configuration of an application. They are terse, can be understood by anyone who understands UNIX commands, result in a testable product and can easily be turned into an automated installation script using a little awk'ward magic. Just in case you want to install the application in question the good old-fashioned way, without the Docker hassle 🙂
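For instance, a Dockerfile like the illustrative one below reads as an installation guide as much as a build spec:

```dockerfile
# Illustrative example: each line documents one installation or configuration step.
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y nginx
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
EXPOSE 80
CMD ["nginx"]
```

Strip the Dockerfile keywords from the RUN lines (the awk trick from the post) and what remains is an ordinary shell installation script.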
Read more →

Deploying a Node.js app to Docker on CoreOS using Deis

The world of on-premises private PaaSes is changing rapidly. A few years ago, we were building on-premises private PaaSes on top of the existing infrastructure, using Puppet as an automation tool to quickly provision new application servers. We provided a self-service portal where development teams could get any type of server in any type of environment running within minutes. We created a virtual server for each application to keep things manageable, which of course is quite resource intensive.

On June 9th Docker was declared production ready, which opens up the option of provisioning lightweight containers to the teams instead of full virtual machines. This will increase the speed of provisioning even further, while reducing the cost of creating a platform and minimising resource consumption.

To illustrate how easy life is becoming, we are going to deploy an original Cloud Foundry Node.js application to Docker on a CoreOS cluster. This hands-on experiment is based on macOS, Vagrant and VirtualBox.
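The Deis part of that workflow is pleasantly short. Roughly, and from memory (the exact commands depend on the Deis release, and the controller URL is hypothetical):

```
deis login http://deis.example.com   # log in to the (hypothetical) Deis controller
deis create                          # register a new application with the controller
git push deis master                 # Deis builds the Node.js app into a container and deploys it
```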
Read more →

Tutorial - Using Deployit Cloud Pack with Amazon EC2 – Part 1

Deployit's Cloud Pack provides you with the ability to create and destroy environments on virtualized infrastructure from within Deployit. It supports both EC2 and vSphere. In this first part of the tutorial, I am going to show you how to set up Amazon AWS and populate the Deployit repository in such a way that you can create and destroy virtual machines from the Deployit console.
Read more →

Continuous Delivery Essentials: Autonomous Systems

As the complexity of your IT architecture grows, it becomes increasingly difficult to implement a change by changing a single system. The dependencies may even grow so strong that a single request requires changes in multiple interdependent systems. To make sure that individual changes on different systems will work correctly together, you need to test all new versions of the systems working together in an integrated acceptance environment. After an extensive test period, you need to release all new versions of the systems to production at the same time. The integrated acceptance environment becomes a bottleneck for the individual teams, as each team wants to test its changes in isolation. It is clear that the complexity of the IT landscape increases the time to market of changes.

How did it turn out this way, and how can it be avoided?

Read more →