Security is probably one of the biggest topics when it comes to containers. Developers love containers, and some ops teams do as well, but the discussion usually comes down to security: is it safe to use, and what happens if someone breaks out? The very characteristics of containers that we love can also be weak spots when it comes to security. In this blog I want to show some common methods to establish defence in depth around your containers. This is container-specific, so I won't be talking about locking down the host nodes or reducing the attack surface, e.g. by disabling unneeded Linux daemons.

Read-only containers (Docker 1.5)
First up is the possibility to run a read-only container. By specifying --read-only, the container's rootfs is mounted read-only, so no process inside the container can write to it. This means that if your app has a vulnerability that allows uploading files, marking the container's rootfs as read-only blocks it. Note that this also prevents applications from logging to the rootfs, so you may want to use a remote logging mechanism or a volume for that.

Usage (docs):
$ docker run --read-only -v /icanwrite busybox touch /icanwrite/here

User-namespaces (Experimental)

Lots of people are waiting for this one to land in stable. Currently, being root in the container means you are also root on the host. If you are able to mount the host's /bin inside your container, you can add whatever you want there and possibly take over the host system. With the introduction of user namespaces, you will be able to run containers where the root user inside the container keeps its privileged capabilities, but outside the container the uid:gid is remapped to an unprivileged user and group. This is also known as phase 1: a remapped root per daemon instance. A possible next phase could be full maps and per-container mapping, but this is still under debate.

Usage (docs):

$ docker daemon --userns-remap=default
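To give an idea of how the remapping works: with --userns-remap=default, the daemon creates a dockremap user and looks up its subordinate ID range in /etc/subuid and /etc/subgid. The snippet below mimics such an entry locally (the range values are typical defaults and will differ per host):

```shell
# Simulate a typical /etc/subuid entry for the "dockremap" user.
# Format is name:first-host-uid:range-size.
cat <<'EOF' > subuid.example
dockremap:165536:65536
EOF

# With this entry, uid 0 (root) inside a container is remapped to
# host uid 165536, an unprivileged user as far as the host is concerned.
awk -F: '{ print "container uid 0 -> host uid " $2 }' subuid.example
```

So even if a process breaks out of a remapped container as "root", the host kernel sees it as a high, unprivileged uid.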

Seccomp (Git master branch)

With namespaces we have separation, but we would also like to control what can happen inside a running container. That's where seccomp comes into play. Seccomp is short for secure computing mode. It allows you to filter syscalls: you can whitelist the syscalls your application needs and deny all others, or, as in the quick example below, allow everything by default and deny specific syscalls. Given the following socket.json:

"defaultAction": "SCMP_ACT_ALLOW",
"syscalls": [
"name": "socket",
"action": "SCMP_ACT_ERRNO"

will result in the following:

# docker run -ti --rm --security-opt seccomp:socket.json ubuntu bash
root@54fd6641a219:/# nc -l 555
nc: Operation not permitted
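The profile above is a blacklist. The opposite approach mentioned earlier, denying everything except what you explicitly allow, looks like this in the same profile format. The syscall list here is deliberately abbreviated and illustrative; a real application needs far more syscalls to even start:

```shell
# Write a minimal whitelist-style profile: deny every syscall by default,
# explicitly allow a few. (Illustrative only -- not enough for a real program.)
cat <<'EOF' > whitelist.json
{
    "defaultAction": "SCMP_ACT_ERRNO",
    "syscalls": [
        { "name": "read",       "action": "SCMP_ACT_ALLOW" },
        { "name": "write",      "action": "SCMP_ACT_ALLOW" },
        { "name": "exit_group", "action": "SCMP_ACT_ALLOW" }
    ]
}
EOF

# Sanity-check that the profile is valid JSON before handing it to Docker.
python3 -m json.tool whitelist.json > /dev/null && echo "profile OK"
```

A whitelist is stricter, but takes more work to get right; tracing your application with strace is a common way to discover which syscalls it actually uses.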

Project Nautilus

One of the missing pieces in the ecosystem was checking image contents. There was a great buzz around this when an article was published stating that over 30% of the official images on the Docker Hub contained common vulnerabilities. Docker got to work, and had been scanning official images in the background on the Docker Hub before they published anything about it. During DockerCon EU, they announced Project Nautilus, an image-scanning service from Docker that makes it easier to build and consume high-integrity content.

There is not much official information about Nautilus yet. We know it has been running in the background, and Docker says they have secured over 74 million pulls with it. Recently they created a survey asking how it could be used, so I can only offer some assumptions. First up, what Docker says it does:

  • Image security
  • Component inventory/license management
  • Image optimization
  • Basic functional testing

Here are some pointers on things that may be coming soon:

  • Running Nautilus on-premise
  • Pricing may be per image or per deployed node


AppArmor profiles

By using AppArmor you can restrict a container's capabilities with profiles. These profiles can be really fine-grained, but a lot of people don't want to take the time to write them. Jessie Frazelle, one of the core maintainers of Docker, created bane to make writing these profiles easier. It takes a TOML input file and generates and installs an AppArmor profile. The profile can then be applied when running a Docker container, using the same --security-opt syntax as before:

docker run -d --security-opt="apparmor:name_of_profile" -p 80:80 nginx

Docker Security Profiles

These are all parts of securing your containers, and of course Docker is also working on making this as easy to use as possible. If you want to know more about where this is heading, check out the Docker Security Profiles proposal on GitHub to keep yourself up to date.