Lock Azure resources to prevent accidental deletion

In some cases you want to protect critical resources from accidental deletion. Some examples are a storage account with source data for processing, a Key Vault with disk encryption keys, or another key component of your infrastructure. Losing such a key resource can make recovery dramatic. Resource Manager locks enable you to protect these critical resources from deletion.
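As an illustration, a minimal ARM template that applies such a lock at the resource group level could look roughly like the sketch below (the lock name, notes and API version are examples, not taken from the original post); a CanNotDelete lock on the resource group is inherited by all resources in it.

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Authorization/locks",
      "apiVersion": "2016-09-01",
      "name": "DoNotDeleteLock",
      "properties": {
        "level": "CanNotDelete",
        "notes": "Protects this resource group and its resources against accidental deletion."
      }
    }
  ]
}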

Read more →

Infrastructure as Code and VSTS

Your team is in the process of developing a new application feature, and the infrastructure has to be adapted. The first step is to change a file in your source control system that describes your infrastructure. When the changed definition file is saved in your source control system, it triggers a new build and release. Your new infrastructure is deployed to your test environment, and the whole process of getting the new infrastructure deployed took minutes, while you only changed a definition file and never touched the infrastructure itself.

Does this sound like a dream? It is called Infrastructure as Code. In this article we will explain what Infrastructure as Code (IaC) is, the problems it solves and how to apply it with Visual Studio Team Services (VSTS).

Read more →

Microservices, not so much news after all?

A while ago at Xebia we tried to streamline our microservices effort. In a kick-off session, we got quite badly sidetracked (as is often the case) by a meta discussion about what the appropriate context and process to develop microservices would be. After an hour of back-and-forth, we reached consensus that it might be helpful to place a topic like microservices in a larger perspective. Below I'll summarize my views on how to design robust microservices: start with the bigger picture, take time designing a solution, then code your services.

Read more →

Keep your ARM deployment secrets in the Key Vault

When creating new resources in Azure that have secrets like passwords or SSL certificates, you can store them securely in the Key Vault and retrieve them from the Key Vault when you deploy. Only the people who need access to the secrets can read and write them in the Key Vault. In an infrastructure-as-code scenario the secrets are supplied when deploying your templates to Azure, so the code itself stays free of secrets.
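To sketch the idea (the subscription, resource group, vault and secret names below are placeholders, not values from the original post), a parameter file can reference a Key Vault secret so the actual value never appears in source control:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "adminPassword": {
      "reference": {
        "keyVault": {
          "id": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.KeyVault/vaults/<vault-name>"
        },
        "secretName": "adminPassword"
      }
    }
  }
}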

Read more →

Conditional parts in ARM Templates

When creating reusable ARM templates you have a number of options for managing conditional parts in your templates. The smallest conditions can be handled by parameters, medium differences by t-shirt sizes, and large differences by linked templates. In this blog post I'll show how to implement conditions with linked templates.

Making conditions with linked templates
From one template in Resource Manager you can link to another template. This enables you to decompose a large template into smaller, more maintainable templates. The linking is done with the resource type Microsoft.Resources/deployments, which contains a templateLink property with the URI of the actual template.
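As a rough sketch of the pattern (the parameter and file names are illustrative, not taken from the post), the deployments resource can build the templateLink URI from a parameter value, so that the parameter effectively decides which linked template gets deployed:

{
  "type": "Microsoft.Resources/deployments",
  "apiVersion": "2015-01-01",
  "name": "conditionalPart",
  "properties": {
    "mode": "Incremental",
    "templateLink": {
      "uri": "[concat(parameters('templateBaseUrl'), parameters('deploymentOption'), '.json')]",
      "contentVersion": "1.0.0.0"
    },
    "parameters": {
      "environmentName": { "value": "[parameters('environmentName')]" }
    }
  }
}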

Read more →

Nomad 0.5 configuration templates: consul-template is dead! long live consul-template!

Or... has Nomad made the Consul-template tool obsolete?

If you employ Consul or Vault to provide service discovery or secrets management to your applications you will love the freshly released 0.5 version of the Nomad workload scheduler: it includes a new 'template' feature to dynamically generate configuration files from Consul and Vault data for the jobs it runs. Bundling Consul-template as a sidecar to your application is no longer necessary.

Nomad, Consul and Consul-template

A year ago Nomad 0.2 added support for automatic registration of jobs in Consul via a service configuration block. However, the applications themselves still had to handle reading data from Consul. For this you had the following three options:

Read more →

Deep dive into Windows Server Containers and Docker – Part 1 – Why should we care?

With the introduction of Windows Server 2016 Technical Preview 3 in August 2015, Microsoft enabled container technology on the Windows platform. While Linux has had its container technology since August 2008, such functionality was not supported on Microsoft operating systems before. Thanks to the success of Docker on Linux, Microsoft decided 2.5 years ago to start working on a container implementation for Windows. Currently we are able to test this new container technology on Windows Server 2016 and Windows 10.

Last September (2016) Microsoft finally announced that it had released Windows Server 2016 to the public. But what does that mean for me as a developer, or for us as an enterprise organisation? In this deep-dive series of blog posts we're going to look at the different aspects of working with Windows Containers and Docker, and at how containers will change the way we deliver our software. But first, in this opening post of the series, we will answer the question of why we should even care about containers…

Why should I care about software containers?

To explain the different advantages, we will reuse the metaphor of shipping containers. For that, we go back to the 26th of November, 1955: the day on which the first containership, the Clifford J. Rogers, was taken into service. A day which changed the course of world trade and laid the foundations for what was to become the biggest liner business in the world. But what was unique about this new containership approach? Or maybe a better question: what was the reason for the Vickers shipyard to introduce a new cargo ship? In short: speed, costs, standardisation and isolation.
Read more →

Adding an Azure web app to an Application Service Environment running in another subscription

Web apps and API apps in Azure are great; however, when using them you have to accept that they are connected to the internet directly, without the possibility of adding a WAF or another kind of additional protection (next to the default Azure line of defense). When you want to add something like that, you have to use an internal Application Service Environment to host your apps, so you can control the network access to these apps.

(Image: App Service)

However, adding an Application Service Environment is quite costly if you are only running a few apps in it. (The minimum footprint for an Application Service Environment (ASE) is 2 P2 instances and 2 P1 instances just to run the ASE itself.)

In our case adding an ASE was fine, except that we have quite a lot of subscriptions and most of them are quite small, running only a couple of apps. Adding an ASE to each subscription was going to become too costly, so we came up with the idea of creating one central subscription called "Shared Services" where we would host things that multiple departments could share, such as the WAF functionality, the VNet, the ExpressRoute circuit and also the ASE.

After creating the design we ran into some problems actually implementing it, because we weren't able to select an ASE in another subscription that was part of the same enterprise agreement when creating an App Service Plan or Web App in Azure. After checking, it turned out that this is a limitation of the Azure portal and we had to use ARM templates to create our web app. This didn't matter because we were planning on using ARM templates anyway, so we gave it a try.

At first we had some trouble adding the ASE as our hosting environment. We tried setting the "HostingEnvironment" property to the name of the ASE in our other subscription, but this did not work and we kept receiving errors like "Cannot find HostingEnvironment with name *HostingEnvironmentName*. (Code: NotFound)".

(Screenshot: ASE error message)

After that we tried removing the "HostingEnvironment" property and only setting the "HostingEnvironmentID" to the full resource ID of our ASE. This got our hopes up, because we were able to deploy the web app. However, while it was running on the P1s that were part of the worker pool of our internal ASE, it still had a public DNS name and was accessible from the internet. We clearly weren't supposed to create it this way, so I asked the Microsoft product team for help and they pointed me in the right direction.

It all boils down to using a newer API version for the Web App and App Service Plan resources in the ARM template than the one Visual Studio generates when building ARM templates: we had to use apiVersion 2015-08-01.

In this API version we can set the "hostingEnvironmentProfile" to the full resource ID of our ASE for both the App Service Plan and the Web App. Next to that, we also have to set the sku to the correct worker pool within our ASE.
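To give an idea of what this looks like (the resource names, the aseResourceId parameter and the sku values are illustrative; the exact sku depends on the worker pool you target), the relevant parts of the App Service Plan and Web App resources could be sketched as follows:

{
  "type": "Microsoft.Web/serverfarms",
  "apiVersion": "2015-08-01",
  "name": "[parameters('appServicePlanName')]",
  "location": "[resourceGroup().location]",
  "sku": {
    "name": "P1",
    "tier": "Premium",
    "size": "P1",
    "family": "P",
    "capacity": 1
  },
  "properties": {
    "name": "[parameters('appServicePlanName')]",
    "hostingEnvironmentProfile": {
      "id": "[parameters('aseResourceId')]"
    }
  }
},
{
  "type": "Microsoft.Web/sites",
  "apiVersion": "2015-08-01",
  "name": "[parameters('webAppName')]",
  "location": "[resourceGroup().location]",
  "dependsOn": [
    "[resourceId('Microsoft.Web/serverfarms', parameters('appServicePlanName'))]"
  ],
  "properties": {
    "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('appServicePlanName'))]",
    "hostingEnvironmentProfile": {
      "id": "[parameters('aseResourceId')]"
    }
  }
}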

Now when we deploy our ARM template, it actually creates an App Service Plan and Web App in a different subscription from the one where our ASE is running. Nice!

Hopefully this post will help you when you run into the same problems I did when trying to deploy web apps in an ASE using ARM templates.

Happy Coding / Deploying

Geert van der Cruijsen

Using docker on Windows in VSTS build and release management

In my previous post I showed you how to create a Docker container image that has an ASP.NET 4.5 website running on the full .NET Framework. In this post I want to show you how you can use VSTS Build vNext and the Release Management tools to leverage this Docker technology.

Let us start by creating a Docker image as the result of our build, which we can later use in the deployment pipeline to easily run the website from the container and run some UI tests against it.

Creating the docker image that has the website, from the build

When we want to use Docker as part of our build, we need the build agent to run on a host that has the Docker capabilities built in. For this we can use either Windows 10 Anniversary Edition or Windows Server 2016 Technical Preview 5. In my example I chose Windows Server 2016 TP5, since it is available from the Azure gallery and gives a very simple setup. You choose Windows Server 2016 TP5 with Containers as the base server, and after you provision it in Azure you download the build agent from VSTS and install it on the server. Once the agent is running, we can create a build that contains several commands to create the container image.

First we need to ensure the build produces the required WebDeploy package and accompanying artifacts and places them in the artifact staging area. This is simply done by adding the following MSBuild arguments to the Build Solution task that is part of a standard Visual Studio build template.

/p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true /p:PackageLocation=$(build.stagingDirectory)

After we are done creating the package and copying the files to the artifact staging location, we add an additional copy task that copies the Docker files I described in my previous post to the artifact staging directory, so they become part of the output of the build. I keep the Docker files in my Git repo, so they are versioned and part of every build.

After copying the docker files, we then add a couple of command line tasks that we will look at in more detail.

In the following screenshot you can see the additional build steps I used to make this work. The first extra command I added is the docker build command, which builds the image based on the Dockerfile I described in my previous post. I simply added the Dockerfile to the Git repository, as you normally do with all the infrastructure scripts you might have. The Dockerfile contains the correct name of the package that we produce in the Build Solution task.

(Screenshot: docker build step)

You see me passing in the arguments like I did in the previous post, to give the image a tag that I can use later in my release pipeline to run the image.

You can see I am using the variable $(GitVersion.NugetVersionV2). This variable is available to me because I use the GitVersion task that you can get from the marketplace. GitVersion determines the semantic version of the current build based on your branch and changes, and this can be overridden using Git commit messages. For more info on this task and how it works, you can go here.
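The command behind that build step is along these lines (the Docker Hub account and image name are placeholders, not the ones from the post; it assumes the task runs in the directory where the Dockerfile and the WebDeploy package were copied):

docker build -t <your-dockerhub-account>/mywebsite:$(GitVersion.NugetVersionV2) .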

Now that I have created the image, I also want to be able to use it on any of my machines, by using Docker Hub as a repository for my images. So the next step is to log in to Docker Hub.

(Screenshot: docker login build step)

After I have logged in to Docker Hub, I can push the newly created image to the repository.

(Screenshot: docker push build step)
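The commands behind these two steps are along these lines (the account name and the credential variables are placeholders; in VSTS you would typically store the password as a secret variable):

docker login -u $(docker.username) -p $(docker.password)
docker push <your-dockerhub-account>/mywebsite:$(GitVersion.NugetVersionV2)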

And now we are done with our build. Next up is using the image in our release pipeline.

Running a docker image in the release pipeline

Now I go to Release Management and create a new release definition. To run the Docker container, I need the release agent to run on a Docker-capable machine, exactly the same as with the build. We can then issue commands on that machine to run the image, and it will be pulled from Docker Hub when it is not found on the local machine. Here you can see the release pipeline with two environments, test and production.

(Screenshot: docker run release step)

As you can see, the first step is nothing more than issuing the docker run command and mapping port 80 of the container to port 80 on the machine. We also use the --detach option, since we don't want the agent to be blocked while the container runs; this starts the container and releases control back to the release agent. I also pass in a name, so I can use the same name in a later step to stop and remove the container.
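The full command for that first step looks roughly like this (the image name is a placeholder; $(docker.processname) is the release variable described further below):

docker run --detach --name $(docker.processname) -p 80:80 <your-dockerhub-account>/mywebsite:$(GitVersion.NugetVersionV2)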

Next I run a set of Coded UI tests to validate that my website is running as expected, and then I use the following docker command to stop the container:

(Screenshot: docker stop release step)

docker stop $(docker.processname)

The variable $(docker.processname) is just a variable I defined for this release definition; it contains an arbitrary name that I can use across multiple steps.

Finally, I run the command to remove the container after use. This ensures I can run the pipeline again with a new image after the next build.

(Screenshot: docker rm release step)

For this I use the docker command:

docker rm -f $(docker.processname)

I used the -f flag and set the task to always run, so I am guaranteed the container is removed even after a non-successful release. This ensures the repeatability of the process, which is of course very important.

Summary

As you can see, it is quite easy to build container images during the build and use them in the release pipeline. I used simple command-line tasks to do the job; I assume it is just a matter of time before we will see some Docker-specific tasks in the marketplace for us to use. Microsoft has Docker tasks in the marketplace, but these only target Linux (at the moment I am writing this) and require a Linux Docker machine connection to work. In my example I focused on leveraging Docker on Windows, and I hope we will see tasks for this in the future.