How Agile accelerates your business

Daniel Burm

This drawing explains how agility accelerates your business. It is free to use and distribute. Should you have any questions regarding the subjects mentioned, feel free to get in touch.

Deploying a Node.js app to Docker on CoreOS using Deis

Mark van Holsteijn

The world of on-premise private PaaSes is changing rapidly. A few years ago, we were building on-premise private PaaSes on top of the existing infrastructure, using Puppet as an automation tool to quickly provision new application servers. We provided a self-service portal where development teams could get any type of server in any type of environment running within minutes. We created a virtual server for each application to keep it manageable, which of course is quite resource intensive.

On June 9th, Docker was declared production ready, which opens the option of provisioning lightweight containers to the teams instead of full virtual machines. This will increase the speed of provisioning even further, while reducing the cost of creating a platform and minimising resource consumption.

To illustrate how easy life is becoming, we are going to deploy an original CloudFoundry Node.js application to Docker on a CoreOS cluster. This hands-on experiment is based on Mac OS X, Vagrant and VirtualBox.
Read more

Combining Salt with Docker


You could use Salt to build and run Docker containers, but that is not how I use it here. This blog post is about Docker containers that run Salt minions, which is just an experiment. The use case? Suppose you have several containers that run a particular piece of middleware, and this middleware needs an urgent security update, e.g. an OpenSSL hotfix. It is necessary to perform the update immediately.


The Dockerfile

In order to build a container you have to write down the container description in a file called Dockerfile. Here is the Dockerfile:

# Standard heading stuff

FROM centos
MAINTAINER No Reply noreply@xebia.com

# Do Salt install stuff and squeeze in a master.conf snippet that tells the minion
# to contact the master specified.

RUN rpm -Uvh http://ftp.linux.ncsu.edu/pub/epel/6/i386/epel-release-6-8.noarch.rpm
RUN yum install -y salt-minion --enablerepo=epel-testing
RUN mkdir -p /etc/salt/minion.d
ADD ./master.conf /etc/salt/minion.d/master.conf

# Run the Salt Minion and do not detach from the terminal.
# This is important because the Docker container will exit whenever
# the CMD process exits.

CMD ["/usr/bin/salt-minion"]
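The master.conf snippet that the Dockerfile adds is a plain Salt minion configuration file. A minimal sketch could look like this; the master address below is a placeholder, so substitute the hostname or IP of your own Salt master:

```
# master.conf - dropped into /etc/salt/minion.d/ by the ADD instruction.
# Tells the minion which Salt master to connect to.
# "salt-master.example.com" is a hypothetical address.
master: salt-master.example.com
```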


Build the image

Time to run the Dockerfile through docker. The command is:

$ docker build --rm=true -t salt-minion .

provided that you run this command in the directory where the Dockerfile and master.conf reside. Docker creates an image with tag ‘salt-minion’ and throws away all intermediate images after a successful build.


Run a container

The command is:

$ docker run -d salt-minion

and Docker returns the id of the newly started container.

The Salt minion in the container starts up and searches for a Salt master to connect to, as defined by the configuration setting “master” in /etc/salt/minion.d/master.conf. You might want to run the Salt master in “auto_accept” mode so that minion keys are accepted automatically. Docker assigns a container id to the running container; that id is the magic key that docker reports as the result of the run command.
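The auto_accept mode mentioned above is a single setting in the master's configuration file. A sketch, assuming you accept the security trade-off of not verifying minion keys by hand:

```
# /etc/salt/master
# Automatically accept all incoming minion keys.
# Convenient for experiments like this one, but unsafe on
# untrusted networks: any host can register as a minion.
auto_accept: True
```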

The following command shows the running container:

$ docker ps
CONTAINER ID        IMAGE                COMMAND                CREATED             STATUS              NAMES
273a6b77a8fa        salt-minion:latest   /bin/sh -c /etc/rc.l   3 seconds ago       Up 3 seconds        distracted_lumiere


Apply the hot fix
There you are: the Salt minion is controlled by your Salt master. Provided that you have a state module that contains the OpenSSL hotfix, you can now easily update all Docker containers to include the hotfix:

salt '*' state.sls openssl-hotfix

That is all there is to it.
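The openssl-hotfix state itself is not shown in this post; a minimal sketch could look like the following, assuming a hypothetical middleware service name (a real hotfix would typically pin the exact patched package version for your distribution instead of taking the latest):

```
# /srv/salt/openssl-hotfix.sls
# Upgrade the openssl package to the latest available version.
openssl:
  pkg.latest

# "middleware" is a hypothetical service name: restart whatever
# service links against OpenSSL whenever the package changes.
middleware:
  service.running:
    - watch:
      - pkg: openssl
```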

Can continuous delivery succeed without management support?

Gero Vermaas

Last Wednesday the first meetup of the Continuous Delivery Think Tank was held at the Xebia offices in Hilversum. The goal of the Think Tank is to gather people that want to implement continuous delivery in their organisation, so they can help each other using the collective brainpower of the group. In this first session we explored what the first steps should be when you want to successfully introduce continuous delivery into your organisation. The level of experience varied widely across the participants (from doing 1000s of deployments per day to just starting to explore the possibilities), so there was something to learn and share for everybody. This led to lively discussion, and by using the Six Thinking Hats method we ensured that the question was approached from different viewpoints. The conclusion was that trust and cooperation are key, and although this can start small, it does have to spread across the organisation to be really successful.

Read more

Tutorial - Using Deployit Cloud Pack with Amazon EC2 – Part 1

Mark van Holsteijn

Deployit's Cloud Pack provides you with the ability to create and destroy environments on virtualized infrastructure from Deployit. It supports both EC2 and vSphere. In this first part of the tutorial, I am going to show you how to set up Amazon AWS and populate the Deployit repository in such a way that you can create and destroy virtual machines from the Deployit console.
Read more

Why even Spray-can is Way Too Slow (for my purposes)

Wilco Koorn

In a previous blog I discussed the speed of the Spray-can web server and mentioned some measurements I did. My co-worker Age Mooij, a committer on the Spray project, pointed me at 'weighttp' (see weighttp on GitHub), a tool for benchmarking web servers. Cool! Of course I now had to do more experiments, and so I did. I found out Spray-can is way too slow for my purposes, and here's why.
Read more

On the mysteriously fast Spray-can web-server

Wilco Koorn

I am addicted to a problem: handling unknown peak load on the net. Part of the solution I have in mind involves, of course, a fast web-server. One of the fastest around is Spray-can (see https://github.com/spray/spray-can) and I really like the thing for several reasons I won’t explain here. Anyway, I’m sure you can guess my very first question by now:

How fast is Spray-can really?

Read more

Developing a SOA-based Integration Layer Framework: Features

Marco Fränkel

A few years ago I was asked by one of our customers to help them make better use of their integration layer. Ever since then, my team and I have been working on a framework in support of that. This is the fourth in a series of blogs on the development of our framework, and it discusses the features the framework provides. The one that was announced last time, about building blocks, is postponed for now.

So far I've discussed the goals and challenges surrounding the development activities, but now I'd like to focus more on the framework itself, and on what it brings to those using it.

As soon as a new party (be it service consumer or service provider) connects to our framework, it can profit directly from the wealth of functionality we deliver out-of-the-box. These ‘generic features’ are exactly what one would expect from a (logical) ESB, and are partly based on the Expanded Enterprise Service Bus Pattern.


Read more

Scripting Deployit

Jan Vermeir

All I wanted to do was create a number of plugins and examples for Deployit using the different techniques available. While working on the examples I was frustrated by having to clean up the remainders of previous attempts. So, following in the footsteps of greater men than my humble self (most notably Professor Knuth, who created TeX so he could finish writing a series of books on computer science), I first wrote a script to create junk in the Deployit repository and then get rid of it in one sweeping go.
Read more

Developing a SOA-based integration layer framework: challenges

Marco Fränkel

A few years ago I was asked by one of our customers to help them make better use of their integration layer. Ever since then, my team and I have been working on a framework in support of that. This is the third in a series of blogs on the development of our framework, and it discusses the challenges we had to meet.

In the previous blog of this series I mentioned the goals we had to reach. Succeeding in doing so of course meant we had to overcome a lot of challenges. In order to keep this blog from reaching the size of one of the books of 'The Lord of the Rings' trilogy, I'll limit it to the five below, which together form a pretty good picture of what we had to deal with.

 Read more