Infrastructure as Code and VSTS

Your team is in the process of developing a new application feature, and the infrastructure has to be adapted. The first step is to change a file in your source control system that describes your infrastructure. When the changed definition file is saved in your source control system, it triggers a new build and release. Your new infrastructure is deployed to your test environment, and the whole process takes minutes, while all you changed was a definition file; you never touched the infrastructure itself.

Does this sound like a dream? It is called Infrastructure as Code. In this article we will explain what Infrastructure as Code (IaC) is, the problems it solves and how to apply it with Visual Studio Team Services (VSTS).

Read more →

Microservices, not so much news after all?

A while ago at Xebia we tried to streamline our microservices effort. In a kick-off session, we got quite badly sidetracked (as is often the case) by a meta discussion about what would be the appropriate context and process to develop microservices. After an hour of back-and-forth, we reached consensus that it might be helpful to place a topic like microservices in a larger perspective. Below I’ll summarize my views on how to design robust microservices: start with the bigger picture, take time designing a solution, then code your services.

Read more →

Keep your ARM deployment secrets in the Key Vault

When creating new resources in Azure that involve secrets like passwords or SSL certificates, you can store them securely in the Key Vault and retrieve them from the Key Vault at deployment time. Only the people who need access to the secrets can read and write them in the Key Vault. In an infrastructure-as-code scenario the secrets are supplied when deploying your templates to Azure; the code itself stays free of secrets.
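
As a minimal sketch of how this looks (the subscription, resource group and vault names below are placeholders), an ARM parameter file can pull a secret from the Key Vault with a reference like this:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "adminPassword": {
      "reference": {
        "keyVault": {
          "id": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.KeyVault/vaults/<vault-name>"
        },
        "secretName": "adminPassword"
      }
    }
  }
}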

Read more →

Conditional parts in ARM Templates

When creating reusable ARM templates you have a number of options for managing conditional parts in your templates. The smallest conditions can be done with parameters, medium differences with t-shirt sizes, and large differences with linked templates. In this blog post I’ll show how to implement conditions with linked templates.

Making conditions with linked templates
From one template in Resource Manager you can link to another template. This enables you to decompose a large template into smaller, more maintainable templates. The linking is done with the resource type Microsoft.Resources/deployments. This resource contains a templateLink property with the URI of the actual template.
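
A minimal sketch of such a deployments resource (the storage URI and parameter name are placeholders) looks like this:

{
  "apiVersion": "2015-01-01",
  "type": "Microsoft.Resources/deployments",
  "name": "conditionalPart",
  "properties": {
    "mode": "Incremental",
    "templateLink": {
      "uri": "https://<your-storage>.blob.core.windows.net/templates/withpublicip.json",
      "contentVersion": "1.0.0.0"
    },
    "parameters": {
      "siteName": { "value": "[parameters('siteName')]" }
    }
  }
}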

Read more →

Nomad 0.5 configuration templates: consul-template is dead! long live consul-template!

Or... has Nomad made the Consul-template tool obsolete?

If you employ Consul or Vault to provide service discovery or secrets management to your applications, you will love the freshly released 0.5 version of the Nomad workload scheduler: it includes a new 'template' feature to dynamically generate configuration files from Consul and Vault data for the jobs it runs. Bundling Consul-template as a sidecar to your application is no longer necessary.
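
As a rough sketch of what that looks like (the task and service names here are made up; see the Nomad 0.5 docs for the full stanza), a task can render an nginx upstream list straight from Consul service data:

task "frontend" {
  driver = "docker"

  config {
    image = "nginx:1.11"
  }

  template {
    # Rendered by Nomad from Consul data, using consul-template syntax
    data        = "upstream api { {{ range service \"api\" }}server {{ .Address }}:{{ .Port }}; {{ end }}}"
    destination = "local/upstreams.conf"
    change_mode = "restart"
  }
}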

Nomad, Consul and Consul-template

A year ago Nomad 0.2 added support for automatic registration of jobs in Consul via a service configuration block. However, the applications themselves still had to handle reading data from Consul. For this you had the following three options:

Read more →

Deep dive into Windows Server Containers and Docker – Part 1 – Why should we care?

With the introduction of Windows Server 2016 Technical Preview 3 in August 2015, Microsoft enabled container technology on the Windows platform. While Linux has had container technology since August 2008, such functionality was not available on Microsoft operating systems before. Thanks to the success of Docker on Linux, Microsoft decided 2.5 years ago to start working on a container implementation for Windows. Currently we are able to test this new container technology on Windows Server 2016 and Windows 10.

Last September (2016) Microsoft finally announced the release of Windows Server 2016 to the public. But what does that mean for me as a developer, or for us as an enterprise organisation? In this deep-dive series of blog posts we are going to look at the different aspects of working with Windows Containers and Docker, and at how containers will change the way we deliver our software. But first, in this first post of the series, we will answer the question of why we should even care about containers…

Why should I care about software containers?

To explain the different advantages, we will reuse the metaphor of shipping containers. For that, we go back to the 26th of November, 1955: the day on which the first container ship, the Clifford J. Rogers, was taken into service. A day which changed the course of world trade and laid the foundations for what was to become the biggest liner business in the world. But what was unique about this new container ship approach? Or maybe a better question: what was the reason for the Vickers shipyard to introduce a new cargo ship? In short: speed, costs, standardisation and isolation.
Read more →

Adding an Azure web app to an Application Service Environment running in another subscription

Web apps and API apps in Azure are great; however, using them means agreeing to have them connected to the internet directly, without the possibility of adding a WAF or other kind of additional protection (next to the default Azure line of defense). When you want to add something like that, you have to add an internal Application Service Environment to host your apps, so you can control the network access to these apps.


However, adding an Application Service Environment is quite costly if you are only running a few apps in it: the minimum footprint for an Application Service Environment (ASE) is two P2 and two P1 instances.

In our case adding an ASE was fine, except that we have a scenario with quite a lot of subscriptions, most of them small and running only a couple of apps. Adding an ASE for each subscription was going to become too costly, so we came up with the idea of creating one central subscription called “Shared Services” where we would host things that multiple departments could share, such as the WAF functionality, the VNet, the ExpressRoute circuit and also the ASE.

After creating the design we ran into some problems actually implementing it, because we weren’t able to select an ASE in another subscription that was part of the same enterprise agreement when creating an App Service Plan or Web App in Azure. This turned out to be a limitation of the Azure portal, and we had to use ARM templates to create our web app. That didn’t matter, because we were planning on using ARM templates anyway, so we started to give it a try.

At first we had some trouble adding the ASE as our hosting environment. We tried setting the “HostingEnvironment” property to the name of the ASE in our other subscription, but this did not work and we kept receiving errors like “Cannot find HostingEnvironment with name *HostingEnvironmentName*. (Code: NotFound)”.

[Screenshot: ASE error message]


After that we tried removing the “HostingEnvironment” property and only setting the “HostingEnvironmentID” to link directly to the full resource ID of our ASE. This got our hopes up, because we were able to deploy the web app; however, it was running on the P1s that were part of the worker pool of our internal ASE while it still had a public DNS name and was accessible from the internet. Clearly we weren’t supposed to create it this way, so I asked the Microsoft product team for help and they pointed me in the right direction.

It all boils down to using a newer API version of the Web App and App Service Plan ARM templates than the one Visual Studio generates when building ARM templates: we had to use apiVersion 2015-08-01.

In that API version we can set the “hostingEnvironmentProfile” to the full resource ID of our ASE, for both the App Service Plan and the Web App. Next to that, we also have to set the SKU to the correct worker pool within our ASE.
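
A minimal sketch of the App Service Plan side (the parameter names are mine, and the SKU values must match the worker pool of your ASE; the Web App resource gets the same hostingEnvironmentProfile):

{
  "apiVersion": "2015-08-01",
  "type": "Microsoft.Web/serverfarms",
  "name": "[parameters('appServicePlanName')]",
  "location": "[resourceGroup().location]",
  "properties": {
    "name": "[parameters('appServicePlanName')]",
    "hostingEnvironmentProfile": {
      "id": "[parameters('aseResourceId')]"
    }
  },
  "sku": {
    "name": "P1",
    "tier": "Premium",
    "size": "P1",
    "family": "P",
    "capacity": 1
  }
}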

Now when we deploy our ARM template, it actually creates an App Service Plan and Web App in another subscription than the one where our ASE is running. Nice!

Hopefully this post will help you when you run into the same problems I did when trying to deploy web apps in an ASE using ARM templates.

Happy Coding / Deploying

Geert van der Cruijsen


Using docker on Windows in VSTS build and release management

In my previous post I showed you how to create a docker container image that runs an ASP.NET 4.5 website on the full .NET framework. In this post I want to show you how you can use VSTS Build vNext and Release Management to leverage the docker technology.

Let us first start by creating a docker image as the result of our build, which we can later use in the deployment pipeline to very easily run the website from the container and run some UI tests against it.

Creating the docker image that has the website, from the build

When we want to use docker as part of our build, we need the build agent to run on a host that has the docker capabilities built in. For this we can use either Windows 10 Anniversary Edition or Windows Server 2016 Technical Preview 5. In my example I chose Windows Server 2016 TP5, since it is available from the Azure gallery and gives a very simple setup. You choose the Windows Server 2016 TP5 with Containers image as the base server, and after you provision it in Azure you download the build agent from VSTS and install it on the server. Once the agent is running, we can create a build that contains several commands to create the container image.

First we need to ensure the build produces the required webdeploy package and accompanying artifacts, and places those in the artifact staging area. This is simply done by adding the following MSBuild arguments to the build solution task that is part of a standard Visual Studio build template.

/p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true /p:PackageLocation=$(build.stagingDirectory)

After we are done creating the package and copying the files to the artifact staging location, we add an additional copy task that copies the docker files I described in my previous post to the artifact staging directory, so they become part of the output of the build. I keep the docker files in my Git repo, so they are versioned and part of every build.

[Screenshot: copy docker files build step]

After copying the docker files, we then add a couple of command line tasks that we will look at in more detail.

In the following screenshot you can see the additional build steps I used to make this work. The first extra command I added is the docker build command, which builds the image based on the dockerfile I described in my previous post. I just added the dockerfile to the Git repository, as you would normally do with all the infrastructure scripts you might have. The dockerfile refers to the webdeploy package by the name that the build solution task produces.

[Screenshot: docker build step in the build definition]

You see me passing in the arguments like I did in the previous post, to give the image a tag that I can use later in my release pipeline to run the image.

You see I am using a variable $(GitVersion.NugetVersionV2). This variable is available to me because I use the GitVersion task that you can get from the marketplace. GitVersion determines the semantic version of the current build based on your branch and changes, and this can be overridden using git commit messages. For more info on this task and how it works you can go here
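
In plain terms, the build step in the screenshot above runs something like the following command (a sketch; the repository name mydockerid/mywebsite is just an example, and the build context is the staging directory where the dockerfile and package were copied):

docker build -t mydockerid/mywebsite:$(GitVersion.NugetVersionV2) $(build.stagingDirectory)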

Now that I have created the image, I also want to be able to use it on any of my machines, using Docker Hub as a repository for my images. So the next step is to log in to Docker Hub.

[Screenshot: docker login step]

After I have logged in to Docker Hub, I can push the newly created image to the repository.

[Screenshot: docker push step]
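
Together these two steps boil down to the following commands (a sketch; $(docker.username) and $(docker.password) are variables I would define on the build definition, and the repository name is again an example):

docker login -u $(docker.username) -p $(docker.password)
docker push mydockerid/mywebsite:$(GitVersion.NugetVersionV2)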

And now we are done with our build. Next up: using the image in our release pipeline.

Running a docker image in the release pipeline

Now I go to Release Management and create a new release definition. To run the docker container, I need the release agent to run on a docker-capable machine, exactly the same as with the build. We can then issue commands on that machine to run the image; docker will pull it from Docker Hub when it is not found on the local machine. Here you can see the release pipeline with two environments, test and production.

[Screenshot: release pipeline with docker run step]

As you can see, the first step is nothing more than issuing the docker run command and mapping port 80 of the container to port 80 on the machine. We also use the --detach option, since we don’t want the agent to block on the running container; this starts the container and returns control to the release agent. I also pass in a name, so I can use the same name in a later stage to stop and remove the container.
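
As a sketch, that run step amounts to the following command (the image name is the same example as in the build):

docker run --detach --name $(docker.processname) -p 80:80 mydockerid/mywebsite:$(GitVersion.NugetVersionV2)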

Next I run a set of Coded UI tests to validate that my website is running as expected, and then I use the following docker command to stop the container:

[Screenshot: stop container release step]

docker stop $(docker.processname)

The variable $(docker.processname) is just a variable I defined for this release template; it contains an arbitrary name that I can use across multiple steps.

Finally, I run the command to remove the container after use. This ensures I can run the pipeline again with a new image after the next build.

[Screenshot: remove container release step]

For this I use the docker command:

docker rm -f $(docker.processname)

I use the -f flag and set the task to always run, so I am guaranteed the container is removed even after a non-successful release. This ensures the repeatability of the process, which is of course very important.

Summary

As you can see, it is quite easy to build container images during the build and use them in the release pipeline. I used simple command line tasks to do the job; I assume it is just a matter of time before we see docker-specific tasks in the marketplace for us to use. Microsoft has docker tasks in the marketplace, but these only target Linux (at the moment I am writing this) and require a Linux docker machine connection to work. In my example I am focused on leveraging docker on Windows, and I hope we will see tasks for this in the future.

Deploying ASP.NET 4.5 to Docker on Windows

At the moment of this writing, if you search the internet for ASP.NET and docker, all you will find is how to deploy ASP.NET Core applications to a Linux docker container. Although I love the initiative of ASP.NET Core, I believe ASP.NET 4.5 is something many of you know and love already, and nobody talks about how we can leverage docker on Windows to run this full version of ASP.NET.

To get started we need a Windows version that is capable of natively running docker. By natively running docker, I mean that docker is built into the OS. So no use of the Docker for Windows tools, since we don’t want Linux containers; we want to run Windows containers! At this moment you can use Windows 10 Anniversary Edition or Windows Server 2016 Technical Preview 5 to go through the steps that I describe here to get your ASP.NET 4.5 website running in a docker-on-windows container.

What do we need to roll out an ASP.NET website to a Windows docker container?

When you run an ASP.NET 4.5 website, you need the following things:

  • The operating system with IIS installed
  • ASP.NET 4.5 installed
  • Webdeploy installed

I personally love to use webdeploy to deploy the website after the build, so deployment works exactly the same way as deploying to Azure App Services or to any local IIS server you already know and love.

Building the container with IIS, ASP.NET and Webdeploy

Here are the steps you need to take to create a docker container that has all these required ingredients:

First we need a basic operating system image from Docker Hub. For this you can run the following command from the command line:

docker pull microsoft/windowsservercore

Now we have the image in our local image gallery. You can check this with the following command:

docker images

This should output something similar to the following screenshot:

[Screenshot: docker images output]


Now we can start adding the first layer, and that is installing IIS. For this you can use the dism command on Windows, passing in the arguments to install the IIS webserver role on Windows Server Core. You can do this at an interactive prompt or use the docker build command. I prefer the latter, and for this we create a dockerfile that contains the following statements:

FROM microsoft/windowsservercore 
RUN dism /online /enable-feature /all /featurename:iis-webserver /NoRestart

After saving the file under the name dockerfile, without any extension, you run the following command to build the image:

docker build -t windowsserveriis .

The command tells docker to build an image, give it the tag windowsserveriis, and use the current folder (denoted by the dot) as the context for the build. This means that everything stated in the dockerfile is relative to that context. Note that you are only allowed to use lowercase characters for the tag name.

After running the command you now have a new docker image with the name windowsserveriis

If you now run the command:

docker images

you will see the new image available

[Screenshot: docker images output with windowsserveriis]

The next step is to install ASP.NET 4.5.

We can do this in a similar way, by creating a docker file with the following commands:

FROM windowsserveriis
RUN  dism /online /enable-feature /featurename:IIS-ASPNET45

Again, after saving the file, you run the command to build the image:

docker build -t windowsserveriisaspnet .

Now we have an image that is capable of running an ASP.NET application. The next step is to get webdeploy installed in the container. For this we need to download the webdeploy installer and then issue a command that installs it and waits for the installation to finish. We first download the installer into the same folder as the dockerfile and then add it to the image. In the following steps I assume you already downloaded the MSI (WebDeploy_2_10_amd64_en-US.msi) and have it in the same folder as the dockerfile. To install the MSI we use msiexec, and we need to start it as a process that we can wait on. If we simply ran msiexec, the command would return immediately and run in the background, causing the container to exit and leaving us in an undefined state.

When you create the following dockerfile, you install webdeploy:

FROM windowsserveriisaspnet

RUN mkdir c:\install

ADD WebDeploy_2_10_amd64_en-US.msi /install/WebDeploy_2_10_amd64_en-US.msi

WORKDIR /install

RUN powershell Start-Process msiexec.exe -ArgumentList '/i c:\install\WebDeploy_2_10_amd64_en-US.msi /qn' -Wait

Note that we are using PowerShell Start-Process with the -Wait option, so we wait for the installation to finish before we commit the new layer.

Now run the docker command again to build the image using the new dockerfile:

docker build -t windowsserveriisaspnetwebdeploy .

Now we have an image that is capable of hosting our website in IIS and of using webdeploy to install the website.

Doing it all in one dockerfile

In the previous steps we created a new dockerfile for each step, but it is probably better to do this in one file, batching all commands together and leaving you with the same end state. We can also optimize the process a bit, since Microsoft already provides an image called microsoft/iis that has the IIS feature enabled. This means we can use that image as the base layer, skip the install of IIS, and only enable ASP.NET 4.5 on top of it.

The simplified docker file looks as follows:

FROM microsoft/iis
RUN dism /online /enable-feature /featurename:IIS-ASPNET45
RUN mkdir c:\install
ADD WebDeploy_2_10_amd64_en-US.msi /install/WebDeploy_2_10_amd64_en-US.msi
WORKDIR /install
RUN powershell Start-Process msiexec.exe -ArgumentList '/i c:\install\WebDeploy_2_10_amd64_en-US.msi /qn' -Wait

Now again we run the docker build command, giving us a docker image capable of running our website and of consuming the webdeploy packages that a standard ASP.NET build procedure produces.

docker build -t windowsserveriisaspnetwebdeploy .

The final step is to deploy your webdeploy package to the image.

Getting the webdeploy package

Now before we can deploy our website we need to get the webdeploy package.

I assume you have a standard ASP.NET web project in Visual Studio. In that case you can very easily create the deployment package from within Visual Studio (in the next post I will show you how to do this using VSTS/TFS builds).

When you right-click the Visual Studio project, you can select the Publish option:

[Screenshot: Publish menu option]

After selecting publish you will see the following dialog:

[Screenshot: Publish dialog]

In order to just create a package instead of deploying to a server or Azure, I select Custom.

[Screenshot: Custom publish target]

Then you give the profile a name, in my case dockerdeploydemo.

[Screenshot: profile name dialog]

Then we select Web Deploy Package from the dropdown and provide the required information: the package location and the name of the website.

[Screenshot: Web Deploy Package settings]

Next you can set up database connections if you have any; in my case I have no database.

[Screenshot: database settings]

Next, click Publish and you will find the resulting deployment package and accompanying deployment files in the c:\temp folder.

[Screenshot: publish output in c:\temp]

Now that we have the webdeploy package and the accompanying deployment artifacts, we can again create a dockerfile that uploads the package to the container and installs the website there. This leaves you with a complete docker image that runs your website.

Publish the website in the docker container

The dockerfile to deploy your website looks as follows:

FROM windowsserveriisaspnetwebdeploy

RUN mkdir c:\webapplication

WORKDIR /webapplication

ADD dockerdeploydemo.zip /webapplication/dockerdeploydemo.zip

ADD dockerdeploydemo.deploy.cmd /webapplication/dockerdeploydemo.deploy.cmd
ADD dockerdeploydemo.SetParameters.xml /webapplication/dockerdeploydemo.SetParameters.xml

RUN dockerdeploydemo.deploy.cmd /Y

We build the container again using the docker build command:

docker build -t mycontainerizedwebsite .

This finally results in our web application in a container that we can run on any Windows server that has Windows containers enabled.

Running the website in the container

In order to test whether we succeeded, we issue the docker run command and map container port 80 to a port on our host. This is done using the -p option, where you specify a host port and a container port. We also need to specify a command that keeps the container running. For this we can use e.g. ping localhost -t, which results in an endless ping loop; that is enough to keep the container alive. So to test the container we run the following command:

docker run -p 80:80 mycontainerizedwebsite ping localhost -t

Now we can browse to the website. Be aware that you can only reach the container from the outside: if you browse to localhost (127.0.0.1) on the host itself, you will not see any results. You need to address your machine by its actual hostname or external IP address.
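
If you want to double-check which IP address the container itself got on the host network, docker inspect can tell you (a sketch; replace <container-id> with the id that docker ps reports for your container):

docker inspect -f "{{ range .NetworkSettings.Networks }}{{ .IPAddress }}{{ end }}" <container-id>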

Summary

To summarize what we have done: we first created a docker image capable of running IIS, then we added ASP.NET 4.5, then we added webdeploy, and finally we deployed our website to the container using webdeploy and the package generated by Visual Studio.

In the next post I will show you how we can use this image in build and release management using VSTS and then deploy the container to a server so we can run automated tests as a stage in the delivery pipeline.

Using Docker tools for Visual Studio with a Hyper-V based Docker host

In the past few weeks I’ve been playing around with containerizing an ASP.NET Core application using the Docker tools for Visual Studio. These tools allow you to develop and debug your app locally inside a Docker container. To do this, you need a local Docker host. While you could ask your IT department to provide one for you, I found it much more convenient to run a virtual machine locally on my laptop, so I have it available everywhere I go. To create a local Docker host, you need to use the Docker Toolbox, which uses VirtualBox to create a local virtual machine that serves as your Docker host. However, I already had Hyper-V installed as my virtualization hypervisor. Hyper-V works great on Windows 10, so I wanted to keep it. Sadly, VirtualBox doesn’t play nice with Hyper-V (in short, VirtualBox won’t install if Hyper-V is enabled).

Read more →