cloud

Created an open source VSTS build & release task for Azure Web App Virtual File System

Geert van der Cruijsen

I’ve created a new VSTS Build & Release task to help you interact with the Virtual File System (VFS) API, which is part of the Kudu API of your Azure Web App. Currently the task can only be used to delete specific files or directories from the web app during your build or release workflow. It will be updated in the near future to also list, upload and download files through the VFS API.

The reason I made this task is that I needed it at my current customer. We’re deploying our custom solution to a Sitecore website running on Azure Web Apps using MSDeploy. The deployment consists of two parts: an install of the out-of-the-box Sitecore installation and the deployment of our customisations. When deploying new versions we want to keep the Sitecore installation, and MSDeploy will update most of our customisations. Some customisations, however, create artifacts that stay on the server and aren’t controlled by the MSDeploy package, which can cause errors in our web application. This new VSTS build/release task can help you delete these files. In the future the task will be extended with other functionality of the VFS API, such as listing, uploading or downloading files.
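Under the hood the task talks to the Kudu VFS REST API. Purely as an illustration of what that API does (this is not the task itself; the site name, credentials and file path below are placeholders), a single file can be deleted with a plain HTTP call:

# Illustration only: delete one file through the Kudu VFS API.
# <user> and <password> are the web app's deployment credentials;
# <mysite> and the path after /api/vfs/ are placeholders.
# "If-Match: *" skips the ETag check the VFS API performs on deletes.
curl -X DELETE \
     -u '<user>:<password>' \
     -H 'If-Match: *' \
     'https://<mysite>.scm.azurewebsites.net/api/vfs/site/wwwroot/App_Data/old-artifact.txt'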

The task is available in the VSTS Marketplace and is open source on GitHub.

Let’s have a look at how to use this task and how it works under the hood.
 Read more

Top 5 Ingredients for developing Cloud Native Applications

Tom Höfte

Introduction

Cloud native applications are a trend in IT that promises to develop and deploy applications at scale, fast and cost-efficiently, by leveraging cloud services to get run-time platform capabilities such as performance, scalability and security out of the box. Teams are able to focus on delivering functionality, which increases the pace of innovation, all aimed at staying ahead of the competition. Companies such as Netflix and Uber disrupt their markets by leveraging cloud native capabilities to quickly introduce their products at a global scale. Adapt or die.

This article is the start of a series. The goal of this initial article is to explain the why and how of cloud native applications by defining the top 5 ingredients and their rationale. In follow-up articles, I will explain each ingredient in more detail. Read more

Understanding serverless cloud and clear

Martijn van Dongen

Serverless is considered the successor to containers. But although it’s promoted heavily, it still isn’t the best fit for every use case. By knowing its pitfalls and disadvantages, it becomes quite easy to find the use cases that do fit the pattern. This post gives some technology perspectives on the maturity of serverless today.

 Read more

Adding an Azure web app to an Application Service Environment running in another subscription

Geert van der Cruijsen

Web Apps and API Apps in Azure are great. However, when using them you have to accept that they are connected directly to the internet, without the possibility of adding a WAF or another kind of additional protection (next to the default Azure line of defense). When you want to add something like that, you have to use an internal Application Service Environment to host your apps, so you can control network access to them.


However, adding an Application Service Environment is quite costly if you are only running a few apps in it: the minimum requirement for an Application Service Environment (ASE) is 2 P2 instances and 2 P1 instances just to run the ASE itself.

In our case adding an ASE was fine, except that we have quite a lot of subscriptions and most of them are small, running only a couple of apps. Adding an ASE to each subscription would become a bit too costly, so we came up with the idea of creating one central subscription called “Shared Services”, where we host things that multiple departments can share, such as WAF functionality, the VNet, the ExpressRoute connection and also the ASE.

After creating the design we ran into some problems actually implementing it, because we weren’t able to select an ASE in another subscription (part of the same enterprise agreement) when creating an App Service Plan or Web App in the Azure portal. It turns out this is a limitation of the portal, and we had to use ARM templates to create our web app. That didn’t matter, because we were planning on using ARM templates anyway, so we gave it a try.

At first we had some trouble adding the ASE as our hosting environment. We tried setting the “HostingEnvironment” property to the name of the ASE in our other subscription, but this did not work and we kept receiving errors like “Cannot find HostingEnvironment with name *HostingEnvironmentName*. (Code: NotFound)”.

ASE error message

 

After that we tried removing the “HostingEnvironment” property and setting only the “HostingEnvironmentID” property, pointing directly to the full resource ID of our ASE. This got our hopes up, because we were able to deploy the web app. However, although it was running on the P1s in the worker pool of our internal ASE, it still had a public DNS name and was accessible from the internet. I guess we weren’t supposed to create it this way, so I asked the Microsoft product team for help and they pointed me in the right direction.

It all boils down to using a newer API version for the Web App and App Service Plan resources in the ARM template than the one Visual Studio generates: we had to use apiVersion 2015-08-01.

In this API version we can set the “hostingEnvironmentProfile” to the full resource ID of our ASE, for both the App Service Plan and the Web App. Next to that, we also have to set the sku to the correct worker pool within our ASE.
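As a rough sketch of what this looks like for the App Service Plan (the names, subscription ID and resource group below are placeholders, and the exact sku values depend on the worker pools configured in your ASE, so treat this as an illustration rather than a complete template):

"resources": [
  {
    "apiVersion": "2015-08-01",
    "type": "Microsoft.Web/serverfarms",
    "name": "[parameters('appServicePlanName')]",
    "location": "[resourceGroup().location]",
    "sku": {
      "name": "P1",
      "tier": "Premium",
      "capacity": 1
    },
    "properties": {
      "name": "[parameters('appServicePlanName')]",
      "hostingEnvironmentProfile": {
        "id": "/subscriptions/<shared-services-subscription-id>/resourceGroups/<ase-resource-group>/providers/Microsoft.Web/hostingEnvironments/<ase-name>"
      }
    }
  }
]

The Web App resource gets the same "hostingEnvironmentProfile" in its properties, next to the usual reference to this App Service Plan.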

Now when we deploy our ARM template, it actually creates an App Service Plan and a Web App in another subscription than the one our ASE is running in. Nice!

Hopefully this post will help you when you run into the same problems I did when trying to deploy web apps into an ASE using ARM templates.

Happy Coding / Deploying

Geert van der Cruijsen


How to create the smallest possible docker container of any image

Mark van Holsteijn

Once you start to do some serious work with Docker, you soon find that downloading images from the registry is a real bottleneck in starting applications. In this blog post we show you how you can reduce the size of any Docker image to just a few percent of the original. So, is your image too fat? Try stripping your Docker image! The strip-docker-image utility demonstrated in this blog makes your containers faster and safer at the same time!

 Read more

How to deploy composite Docker applications with Consul key values to CoreOS

Mark van Holsteijn

Most examples of deploying Docker applications to CoreOS use a single Docker application. But as soon as you have an application that consists of more than one unit, the number of commands you have to type soon becomes annoying. At Xebia we have a best practice that says "Three strikes and you automate": the third time you do something similar, you automate it. In this blog I share the manual page of a utility called fleetappctl, which allows you to perform rolling upgrades and deploy Consul key-value pairs of composite applications to CoreOS, and show three examples of its usage.


fleetappctl is a utility that allows you to manage a set of CoreOS fleet unit files as a single application. You can start, stop and deploy the application. fleetappctl is idempotent and does rolling upgrades on template files with multiple instances running. It can substitute placeholders at deployment time and it is able to deploy Consul key-value pairs as part of your application. Using fleetappctl you have everything you need to create a self-contained deployment unit of your composite application and put it under version control.

The command line options to fleetappctl are shown below:

fleetappctl [-d deployment-descriptor-file]
            [-e placeholder-value-file]
            (generate | list | start | stop | destroy)

option -d

The deployment descriptor file describes all the fleet unit files and Consul key-value pair files that make up the application. All the files referenced in the deployment descriptor may have placeholders for deployment-time values. These placeholders are enclosed in double curly brackets {{ }}.

option -e

The file contains the values for the placeholders to be used on deployment of the application. The file has a simple format:

<name>=<value>
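For example, a placeholder value file could look like this (the names and values are only illustrations, in the same style as the dev.env files used later in this post):

release=V2
message=Hi guys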

start

starts all units in the order in which they appear in the deployment descriptor. If you have a template unit file, you can specify the number of instances you want to start. Start is idempotent, so you may call start multiple times; start will bring the deployment in line with your descriptor.

If the unit file has changed with respect to the deployed unit file, the corresponding instances will be stopped and restarted with the new unit file. If you have a template file, the instances of the template file will be upgraded one by one.

Any Consul key-value pairs defined by consul.KeyValuePairs entries are created in Consul. Existing values are not overwritten.

generate

generates a deployment descriptor (deployit-manifest.xml) based upon all the unit files found in your directory. If a file is a fleet unit template file the number of instances to start is set to 2, to support rolling upgrades.

stop

stops all units in reverse order of their appearance in the deployment descriptor.

destroy

destroys all units in reverse order of their appearance in the deployment descriptor.

list

lists the runtime status of the units that appear in the deployment descriptor.

Install fleetappctl

To install the fleetappctl utility, type the following commands:

curl -q -L https://github.com/mvanholsteijn/fleetappctl/archive/0.25.tar.gz | tar -xzf -
cd fleetappctl-0.25
./install.sh
brew install xmlstarlet
brew install fleetctl

Start the platform

If you do not have the platform running, start it first.

cd ..
git clone https://github.com/mvanholsteijn/coreos-container-platform-as-a-service.git
cd coreos-container-platform-as-a-service
git checkout 029d3dd8e54a5d0b4c085a192c0ba98e7fc2838d
cd vagrant
vagrant up
./is_platform_ready.sh

Example - Three component web application


The first example is a three-component application. It consists of a mount, a Redis database service and a web application. We generate the deployment descriptor, indicate that we do not want to start the mount, start the application, and then modify the web application unit file to change the service name into 'helloworld'. We perform a rolling upgrade by issuing start again. Finally, we list, stop and destroy the application.

cd ../fleet-units/app
# generate a deployment descriptor
fleetappctl generate

# do not start mount explicitly
xml ed -u '//fleet.UnitConfigurationFile[@name="mnt-data"]/startUnit' \
       -v false deployit-manifest.xml > \
        deployit-manifest.xml.new
mv deployit-manifest.xml{.new,}

# start the app
fleetappctl start 

# Check it is working
curl hellodb.127.0.0.1.xip.io:8080
curl hellodb.127.0.0.1.xip.io:8080

# Change the service name of the application in the unit file
sed -i -e 's/SERVICE_NAME=hellodb/SERVICE_NAME=helloworld/' app-hellodb@.service

# do a rolling upgrade
fleetappctl start 

# Check it is now accessible on the new service name
curl helloworld.127.0.0.1.xip.io:8080

# Show all units of this app
fleetappctl list

# Stop all units of this app
fleetappctl stop
fleetappctl list

# Restart it again
fleetappctl start

# Destroy it
fleetappctl destroy

Example - placeholder references

This example shows the use of a placeholder reference in the unit file of the paas-monitor application. The application takes two optional environment variables, RELEASE and MESSAGE, that allow you to configure the resulting responses. The variable RELEASE is configured in the Docker run command in the fleet unit file through a placeholder. The actual value for the current deployment is taken from a placeholder value file.

cd ../fleetappctl-0.25/examples/paas-monitor
#check out the placeholder reference
grep '{{' paas-monitor@.service

...
ExecStart=/bin/sh -c "/usr/bin/docker run --rm --name %p-%i \
 --env RELEASE={{release}} \
...
# checkout our placeholder values
cat dev.env
...
release=V2
# start the app
fleetappctl -e dev.env start

# show current release in status
curl paas-monitor.127.0.0.1.xip.io:8080/status

# start is idempotent (ie. nothing happens)
fleetappctl -e dev.env start

# update the placeholder value and see a rolling upgrade in the works
echo 'release=V3' > dev.env
fleetappctl -e dev.env start
curl paas-monitor.127.0.0.1.xip.io:8080/status

fleetappctl destroy

Example - Env Consul Key Value Pair deployments


The final example shows the use of Consul key-value pairs, placeholders and envconsul to dynamically update the environment variables of a running instance. The environment variables RELEASE and MESSAGE are taken from the keys under /paas-monitor in Consul. The initial values of these keys are set on first deployment, using values from the placeholder file.

cd ../fleetappctl-0.25/examples/envconsul

#check out the Consul Key Value pairs, and notice the reference to placeholder values
cat keys.consul
...
paas-monitor/release={{release}}
paas-monitor/message={{message}}

# checkout our placeholder values
cat dev.env
...
release=V4
message=Hi guys
# start the app
fleetappctl -e dev.env start

# show current release and message in status
curl paas-monitor.127.0.0.1.xip.io:8080/status

# Change the message in Consul
fleetctl ssh paas-monitor@1 \
    curl -X PUT \
    -d \'hello Consul\' \
    http://172.17.8.101:8500/v1/kv/paas-monitor/message

# checkout the changed message
curl paas-monitor.127.0.0.1.xip.io:8080/status

# start does not change the values..
fleetappctl -e dev.env start

Conclusion

CoreOS provides all the basic functionality for a Container Platform as a Service. With the fleetappctl utility it becomes easy to start, stop and upgrade composite applications. The script is supplementary to fleetctl and does not break other ways of deploying your applications to CoreOS.

The source code, manual page and documentation of fleetappctl can be found at https://github.com/mvanholsteijn/fleetappctl.

 

How to deploy High Available persistent Docker services using CoreOS and Consul

Mark van Holsteijn

Providing high availability to stateless applications is pretty trivial, as was shown in the previous blog posts A High Available Docker Container Platform and Rolling upgrade of Docker applications using CoreOS and Consul. But how does this work when you have a persistent service like Redis?

In this blog post we will show you how a persistent service like Redis can be moved around on machines in the cluster, whilst preserving the state. The key is to deploy a fleet mount configuration into the cluster and mount the storage in the Docker container that has persistent data.

 Read more

Rolling upgrade of Docker applications using CoreOS and Consul

Mark van Holsteijn

In the previous blog post we showed you how to setup a High Available Docker Container Application platform using CoreOS and Consul. In this short blog post, we will show you how easy it is to perform a rolling upgrade of a deployed application.

(Updated - June 2nd, 2015 - removed consul-http-loadbalancer-lb from the Vagrant architecture)
 Read more

How to deploy a Docker application into production on Amazon AWS

Mark van Holsteijn

Docker reached production status a few months ago. But having the container technology alone is not enough. You need a complete platform infrastructure before you can deploy your Docker application in production. Amazon AWS offers exactly that: a production-quality platform that offers capacity provisioning, load balancing, scaling and application health monitoring for Docker applications.

In this blog, you will learn how to deploy a Docker application to production in five easy steps.

 Read more

Deploying a Node.js app to Docker on CoreOS using Deis

Mark van Holsteijn

The world of on-premise private PaaSes is changing rapidly. A few years ago, we were building on-premise private PaaSes based upon the existing infrastructure, using Puppet as an automation tool to quickly provision new application servers. We provided a self-service portal where development teams could get any type of server in any type of environment running within minutes. We created a virtual server for each application to keep it manageable, which of course is quite resource intensive.

Since June 9th, Docker has been declared production ready, which opens the option of provisioning lightweight containers to the teams instead of full virtual machines. This will increase the speed of provisioning even further, while reducing the cost of creating a platform and minimising resource consumption.

To illustrate how easy life is becoming, we are going to deploy an original CloudFoundry Node.js application to Docker on a CoreOS cluster. This hands-on experiment is based on MacOS, Vagrant and VirtualBox.
 Read more