Fast feedback loops are instrumental to gaining confidence in changes and achieving a steady pace of delivery. In many teams, Docker has been an important force behind removing delays in the pipeline to production. Taking control of your environments is a powerful move to make as a scrum team.

With Kubernetes emerging as the leading orchestration platform, chances are that the containerized applications you and your team are working on will land on a Kubernetes cluster. That is why I was excited to see that the latest version of Docker makes it possible to run a local Kubernetes cluster. In this blog you will learn how to start a local Kubernetes cluster with the latest Docker version.

Installing Docker Edge

Docker with Kubernetes is currently only available on Docker for Mac in the latest Edge version. Download the installer from the Docker store. Close your currently running Docker daemon if necessary and install the Edge version.

Attention: Switching from Stable to Edge will result in losing all your containers and images!

Enabling the local Kubernetes cluster

Click the Docker icon in the status bar, go to “Preferences”, and on the “Kubernetes” tab, check “Enable Kubernetes”. This will start a single-node Kubernetes cluster for you and install the kubectl command-line utility. This might take a while, but the dialog will let you know once the Kubernetes cluster is ready.
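To confirm the cluster really is up, you can ask kubectl for the node list. This is a quick sketch; the node name shown in the comment is the default that this Docker version registers:

```shell
# Should list exactly one node, named docker-for-desktop in this Docker version,
# with STATUS "Ready" once the cluster has finished starting
kubectl get nodes
```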

Before you continue: if you’ve previously used kubectl, you may have to switch the context to your local cluster. Run the following command:

kubectl config use-context docker-for-desktop
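If you are unsure which contexts you have, kubectl can list them and confirm which cluster it is talking to. A small sketch:

```shell
# List all configured contexts; the active one is marked with an asterisk
kubectl config get-contexts

# Confirm kubectl is now pointed at the local cluster
kubectl cluster-info
```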

Running our first workload

No point in running a Kubernetes cluster if we don’t put it to work, right? Let’s start with deploying the Kubernetes Dashboard UI, which is a Kubernetes workload in itself.

kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

This should create all the necessary objects for the UI to run properly. You can check by running kubectl proxy, followed by visiting http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ in your browser.
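Before opening the dashboard, it can help to verify that its pod came up. The dashboard is deployed into the kube-system namespace, so (a sketch) you can watch for it there:

```shell
# The dashboard pod should appear here and reach STATUS "Running"
# before the UI will load through the proxy
kubectl get pods --namespace kube-system
```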


Running Kubernetes through Docker is an interesting option because it removes the need for the minikube virtual machine and the separate minikube binary to manage it. In some cases, it might even lessen the need for a cluster running in the cloud, reducing costs. But the biggest benefit is, of course, having a more production-like environment at your fingertips in a few seconds.

Further reading