Monitor Your Mesos Cluster with StackState

This post is part 2 in a 4-part series about Container Monitoring. Post 1 dives into some of the new challenges containers and microservices create and the information you should focus on. This article describes how to monitor your Mesos cluster.

Apache Mesos is a distributed systems kernel at the heart of the Mesosphere DC/OS and is designed for operations at very large scale. It abstracts the entire data center into a single pool of computing resources, simplifying running distributed systems at scale.

Mesos supports different types of workloads to build a truly modern application. These distributed workloads include container orchestration (like Mesos containers, Docker and Kubernetes), analytics (Spark), big data technologies (Kafka and Cassandra) and much more.

>>> Read the full article right here.

Docker container secrets on AWS ECS

Almost every application needs some kind of secret to do its work. There are all kinds of ways to provide secrets to containers, but it all comes down to the following five:

  1. Save the secrets inside the image
  2. Provide the secrets through ENV variables
  3. Provide the secrets through volume mounts
  4. Use a secrets encryption file
  5. Use a secrets store
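As a minimal, hypothetical sketch of option 2: the application reads its secret from an environment variable at runtime, while the container platform (an ECS task definition, `docker run -e`, or a compose file) injects the value at start-up. The variable name `DB_PASSWORD` is just an example:

```python
import os

def get_db_password() -> str:
    # Read the secret from an environment variable injected when the
    # container starts. Fail fast if it is missing, so a misconfigured
    # task is caught immediately instead of at the first database call.
    password = os.environ.get("DB_PASSWORD")
    if not password:
        raise RuntimeError("DB_PASSWORD environment variable is not set")
    return password
```

Keep in mind that environment variables can leak via `docker inspect` or process listings, which is part of why options 4 and 5 exist.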

Read more →

TDD is not about unit tests

-- Dave Farley & Arjan Molenaar

On many occasions when we arrive at a customer, we're told the development team is doing TDD. Often, though, the team is writing unit tests but is not doing TDD.

This is an important distinction. Unit tests are useful things, but unit testing by itself says nothing about how to create useful tests that can live alongside your code. TDD, on the other hand, is an essential practice for improving the design of your code. These are very different things.

TDD stands for Test-Driven Development. It's a verb, something you do: TDD is about development, the act of designing and writing software. A unit test is a noun, an artefact of software development.

As a matter of coincidence, one of the main tools used in the practice of TDD is a unit test framework. So perhaps it is not surprising that people get confused.

A unit test framework (such as JUnit, ScalaTest or Jasmine) allows you to execute small bits of code quickly and efficiently. I purposely do not mention "test" here.
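To make that concrete, here is a minimal, hypothetical example using Python's built-in unittest framework. Note that the framework's real job is simply to execute a small bit of code and report an observable result:

```python
import unittest

def add(a, b):
    # The small bit of code we want to execute and observe.
    return a + b

class AddTest(unittest.TestCase):
    def test_adds_two_numbers(self):
        # The framework runs this method and reports pass or fail.
        self.assertEqual(add(2, 3), 5)

# Execute the suite programmatically.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.TestLoader().loadTestsFromTestCase(AddTest)
)
```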

So we have three different things:

* TDD: a design process
* Unit test: a fine grained test case
* Unit test framework: a library and additional tooling for executing small bits of code

When writing code using TDD, you follow a "red-green-refactor" cycle.

First you write a test (there is that annoying word again!). Then you run the test to see it fail. This is the "Red" state: when the test fails, most frameworks highlight the failure in red, hence the name. Running the test at this point may seem odd, but what it allows us to do is "test the test". You run the test to check that it fails in the way you expected; if it doesn't, you have made a mistake somewhere.

Next you write just enough code to make the test pass and you run the test again to prove it - "Green"!

There is some subtlety to this: guidelines that help keep your code clean. Do the minimum to make the test pass, even if that minimum seems naive.

Finally, refactor both the code and the test to make them as clean, simple and readable as possible. Then, just to be on the safe side, run the test again to make sure that you didn't break anything while tidying up.

So, in the "red" state, you're writing a test. In the "green" state you've implemented just enough code to make the test pass and in the "refactor" state you tidy up your code, ready for the next iteration.
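A tiny, hypothetical walk through the cycle might look like this: the test below was written first (red, since `leap_year` did not yet exist), then the minimal implementation made it pass (green), and the version shown here is the refactored, readable form:

```python
def leap_year(year: int) -> bool:
    # Refactored form of the minimal implementation that made the test
    # pass: divisible by 4, except century years not divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def test_leap_year():
    # The test asserts observable behaviour, not implementation detail.
    assert leap_year(2000)      # divisible by 400
    assert not leap_year(1900)  # century year, not divisible by 400
    assert leap_year(2024)
    assert not leap_year(2023)

test_leap_year()
```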

This red-green-refactor idea is central to TDD. If you don't follow it, you aren't doing TDD!

If you write the tests after you have written the code, that's not TDD!

These distinctions matter because they are where the significant value of TDD, well beyond the value of unit testing, comes from.

So what is that "significant value"?

TDD allows us to create higher-quality code. But then again, what defines "high-quality" in code?

I would argue that high quality code is modular, loosely-coupled, has high-cohesion, good separation of concerns and exhibits information-hiding. You may be able to think of other properties of high-quality, but these attributes are certainly in the list of the defining characteristics.

What drivers are there to help us achieve high-quality in code? Before TDD the only drivers were the experience, skill and commitment of an individual software developer.

Let's think about the mechanical process of TDD for a moment. We write a test that specifies some desirable behaviour of our system. We do that before we have written the code to fulfil the behavioural goals of the test. This means that the test can't be tightly coupled to the implementation, because there isn't an implementation yet. In addition, this gives us the ability to think about the functionality before we think about the implementation. Further, if we are writing a test to assert some behaviour of the system, we would have to be pretty dumb to write a test that can't assert that behaviour, i.e. there should be some observable result. This outside-in approach to design drives the code in some well-defined directions.

Code that is "testable" in the TDD sense is modular, loosely-coupled, has high-cohesion, good separation of concerns and exhibits information-hiding. Sound familiar?

So now in addition to the skills and experience of a software developer we have a process that applies a pressure on us to design higher-quality code.

TDD acts as an amplifier for the skills of any software developer.

This is the magic of TDD.

Unit tests have a place. They tend to be somewhat more coarse grained than those written using TDD. They can be useful, but most organisations that write lots of unit tests see some common problems. Tests written after the code-under-test tend to be much more tightly coupled to it. As a result, software that is well unit tested is often difficult to change, because to change it you also need to change the tests. TDD leads you to create tests that are naturally more loosely coupled to the code-under-test, and so helps to alleviate this problem.

Writing the tests first gives us the opportunity to think about the problem domain in an unambiguous language (the programming language) and to think about the interface that has to be exposed, from the client's standpoint. These are not really tests at all; they are "executable specifications" for the behaviour of our code.

Refactoring both the code-under-test and the test code means that we can maintain this loose coupling. It also means that we can ensure that our "executable specifications" are as clear and understandable as we can make them, to make the intent of our design clear. The value of these "specifications" is enormous. One useful side-benefit is that they exist as unit tests (noun), so while the focus of TDD is not testing, we get great testing as a secondary benefit. We say secondary because the benefit to the quality of the design significantly outweighs the usefulness of even a good suite of regression tests.

Lastly, what does it take to keep the unit tests maintainable? After the tests have been written, they have to be maintained for the lifetime of the product. If tests relate only to the technical implementation of the application (tightly coupled to how it's implemented), they are bound to fail whenever the code changes. If, instead, the unit tests created as specifications in the process of TDD describe the behaviour (functionally, the *what*), the tests only fail when there is a change in function.
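To illustrate the difference (all names here are hypothetical), compare a test that is coupled to the implementation with one that specifies behaviour:

```python
class ShoppingCart:
    def __init__(self):
        self._items = []  # internal representation, free to change

    def add(self, name, price):
        self._items.append((name, price))

    def total(self):
        return sum(price for _, price in self._items)

def test_cart_internals():
    # Implementation-coupled: breaks if the list is swapped for a
    # dict, even though the cart still behaves exactly the same.
    cart = ShoppingCart()
    cart.add("book", 10)
    assert cart._items == [("book", 10)]

def test_cart_total():
    # Behavioural "executable specification": only fails if the
    # *function* of the cart changes.
    cart = ShoppingCart()
    cart.add("book", 10)
    cart.add("pen", 2)
    assert cart.total() == 12

test_cart_internals()
test_cart_total()
```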

TDD is the best way to improve the quality of your code.

Want to know more about TDD? Check out Dave Farley's TDD training on

The Container Monitoring Problem

This post is part 1 in a 4-part series about Docker, Kubernetes and Mesos monitoring. This article dives into some of the new challenges containers and microservices create and the metrics you should focus on.

Containers are a solution to the problem of getting software to run reliably when it is moved from one environment to another. A container behaves much like a lightweight virtual machine whose purpose is to provide software isolation.

So why are containers such a big deal?

Containers simply make it easier for developers and operators to know that their software will run, no matter where it is deployed. We see companies moving from physical machines, to virtual machines and now to containers. This shift in architecture looks very promising, but in reality you might introduce problems you didn’t see coming.

Read the full article on

Caveats and pitfalls of cookie domains

Not too long ago, we ran into an apparent security issue at my current assignment: people could sign in with a regular account, but get the authentication and permissions of an administrator user (a privilege escalation bug). As it turned out, the impact of the security issue was low, as the user would need to be logged in as an admin user already, but it was a very confusing issue. In this post I'll explain the situation, how browsers handle wildcard subdomain cookies, and what to keep in mind when building an authentication back-end when it comes to cookies storing session information.

Read more →

The secret to making people buy your product

There is no greater waste than building something extremely efficient, well architected and of high quality that nobody wants.

Yet we see it all the time. We have the Agile manifesto and Scrum probably to thank for that (the seeing bit.) “Our highest priority is to satisfy the customer through early and continuous delivery of valuable software”. It’s the valuable bit that is embodied by the Product Owner in Scrum, or “the value maximiser”.

Lean Startup has taught us that we suffer from cognitive bias and simply assume we know what customers want, and that we should therefore treat our requirements as assumptions. Get out of the building and ask our customers! We all know that Henry Ford would disagree. But could both be right?

Read more →

Deep dive into Windows Server Containers and Docker – Part 2 – Underlying implementation of Windows Server Containers

With the introduction of Windows Server 2016 Technical Preview 3 in August 2015, Microsoft enabled container technology on the Windows platform. While Linux has had container technology since August 2008, such functionality was not supported on Microsoft operating systems before. Thanks to the success of Docker on Linux, Microsoft decided almost 3 years ago to start working on a container implementation for Windows. Since September 2016 we have been able to work with a publicly released version of this new container technology in Windows Server 2016 and Windows 10. But what is the difference between containers and VMs? And how are Windows containers implemented internally within the Windows architecture? In this blogpost we'll dive into the underlying implementation of containers on Windows.

Read more →

Automate incident investigation to save money and become proactive

How many hours did your best engineers spend investigating incidents and problems last month? Do those engineers get a big applause when they solve the issue? Most likely the answers are "a lot" and "yes"…

Problem and incident investigation is hard because you usually have to search through multiple tools, correlate the data from all those tools and interpret it.

Click here to read the full post.

Fixing “HNS failed with error : Unspecified error” on docker-compose for Windows

The past few days I have worked quite a lot with docker-compose on my Windows machine. After something strange happened that crashed the machine, I was no longer able to start any containers that had network connectivity with each other.

Every time I ran docker-compose up from the command line, I got a message telling me it had failed to start the container. The full message was:

“ERROR: for web  Cannot start service web: failed to create endpoint aspnetblogapplication_web_1 on network nat: HNS failed with error : Unspecified error”

Read more →
