Introduction

Docker containers are the most popular containerisation technology. Used properly, they can increase the level of security (in comparison to running applications directly on the host). On the other hand, some misconfigurations can lower the level of security or even introduce new vulnerabilities.

The aim of this cheat sheet is to provide an easy-to-use list of common security mistakes and good practices that will help you secure your Docker containers.

Rules

RULE #0 - Keep Host and Docker up to date

To prevent known container escape vulnerabilities, which typically end in escalating to root/administrator privileges, patching Docker Engine and Docker Machine is crucial.

In addition, containers (unlike virtual machines) share the kernel with the host, therefore a kernel exploit run inside a container will directly hit the host kernel. For example, a kernel privilege escalation exploit (like Dirty COW) run inside a well-insulated container will still result in root access on the host.
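As a minimal sketch, assuming a Debian/Ubuntu host where Docker Engine was installed from the docker-ce package (adapt the package manager and package name to your distribution), keeping both up to date could look like this:

    # check the running Docker Engine version and the host kernel version
    docker version
    uname -r

    # apply pending updates to Docker Engine
    sudo apt-get update && sudo apt-get install --only-upgrade docker-ce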

RULE #1 - Do not expose the Docker daemon socket (even to the containers)

The Docker socket /var/run/docker.sock is the UNIX socket that Docker is listening to. It is the primary entry point for the Docker API. The owner of this socket is root. Giving someone access to it is equivalent to giving unrestricted root access to your host.

Do not enable the TCP Docker daemon socket. If you are running the Docker daemon with -H tcp://0.0.0.0:XXX or similar, you are exposing unencrypted and unauthenticated direct access to the Docker daemon. If you really, really have to do this, you should secure it. Check how to do this in the official Docker documentation.
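If exposing the daemon over TCP is truly unavoidable, the documented approach is to protect the socket with TLS. A hedged sketch (the certificate file names are placeholders; see the official documentation for generating and distributing them):

    dockerd --tlsverify \
      --tlscacert=ca.pem \
      --tlscert=server-cert.pem \
      --tlskey=server-key.pem \
      -H=0.0.0.0:2376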

Do not expose /var/run/docker.sock to other containers. If you are running your docker image with -v /var/run/docker.sock:/var/run/docker.sock or similar, you should change it. Remember that mounting the socket read-only is not a solution; it only makes it harder to exploit. The equivalent in a docker-compose file is something like this:

    volumes:
    - "/var/run/docker.sock:/var/run/docker.sock"

RULE #2 - Set a user

Configuring the container to use an unprivileged user is the best way to prevent privilege escalation attacks. This can be accomplished in different ways:

  • During runtime, using the -u option of the docker run command, e.g.:

    docker run -u 4000 alpine

  • During build time. Simply add a user in the Dockerfile and use it. For example:

    FROM alpine
    RUN addgroup -S myuser && adduser -S -G myuser myuser
    <HERE DO WHAT YOU HAVE TO DO AS A ROOT USER LIKE INSTALLING PACKAGES ETC.>
    USER myuser

In Kubernetes this can be configured in the Security Context using the runAsNonRoot field, e.g.:

    kind: ...
    apiVersion: ...
    metadata:
      name: ...
    spec:
      ...
      containers:
      - name: ...
        image: ...
        securityContext:
          ...
          runAsNonRoot: true
          ...

As a Kubernetes cluster administrator, you can enforce it using Pod Security Policies.
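For example, on clusters that still support Pod Security Policies, a minimal sketch of a policy that rejects containers running as root could look like this (the policy name is illustrative):

    apiVersion: policy/v1beta1
    kind: PodSecurityPolicy
    metadata:
      name: restricted-non-root
    spec:
      privileged: false
      runAsUser:
        rule: MustRunAsNonRoot   # reject pods that try to run as root
      seLinux:
        rule: RunAsAny
      supplementalGroups:
        rule: RunAsAny
      fsGroup:
        rule: RunAsAny
      volumes:
      - '*'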

RULE #3 - Limit capabilities (grant only specific capabilities needed by a container)

Linux kernel capabilities are a set of privileges that can be used by privileged processes. Docker, by default, runs with only a subset of capabilities. You can change it and drop some capabilities (using --cap-drop) to harden your Docker containers, or add some capabilities (using --cap-add) if needed. Remember not to run containers with the --privileged flag - this will add ALL Linux kernel capabilities to the container.

The most secure setup is to drop all capabilities (--cap-drop all) and then add only the required ones. For example:

    docker run --cap-drop all --cap-add CHOWN alpine

And remember: Do not run containers with the --privileged flag!!!

In Kubernetes this can be configured in the Security Context using the capabilities field, e.g.:

    kind: ...
    apiVersion: ...
    metadata:
      name: ...
    spec:
      ...
      containers:
      - name: ...
        image: ...
        securityContext:
          ...
          capabilities:
            drop:
              - all
            add:
              - CHOWN
          ...

As a Kubernetes cluster administrator, you can enforce it using Pod Security Policies.

RULE #4 - Add --no-new-privileges flag

Always run your Docker images with --security-opt=no-new-privileges in order to prevent privilege escalation using setuid or setgid binaries.
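For example (a minimal sketch using the alpine image):

    docker run --security-opt=no-new-privileges -it alpine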

In Kubernetes this can be configured in the Security Context using the allowPrivilegeEscalation field, e.g.:

    kind: ...
    apiVersion: ...
    metadata:
      name: ...
    spec:
      ...
      containers:
      - name: ...
        image: ...
        securityContext:
          ...
          allowPrivilegeEscalation: false
          ...

As a Kubernetes cluster administrator, you can enforce it using Pod Security Policies.

RULE #5 - Disable inter-container communication (--icc=false)

By default inter-container communication (icc) is enabled - it means that all containers can talk with each other (using the docker0 bridged network). This can be disabled by running the Docker daemon with the --icc=false flag. If icc is disabled (icc=false), it is required to tell which containers can communicate using the --link=CONTAINER_NAME_or_ID:ALIAS option. See more in Docker documentation - container communication.
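A minimal sketch (the container and image names are placeholders):

    # start the Docker daemon with inter-container communication disabled
    dockerd --icc=false

    # explicitly allow my-app to reach the my-database container under the alias "db"
    docker run --link=my-database:db my-app-image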

In Kubernetes, Network Policies can be used for this.
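For example, a minimal sketch of a default-deny ingress policy for a namespace, to be complemented by policies that allow only the required traffic:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny-ingress
    spec:
      podSelector: {}        # selects all pods in the namespace
      policyTypes:
      - Ingress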

RULE #6 - Use Linux Security Module (seccomp, AppArmor, or SELinux)

First of all, do not disable the default security profile!

Consider using a security profile like seccomp or AppArmor.
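A hedged sketch of applying custom profiles (the profile file and profile name are placeholders that you would have to create and load yourself):

    # run with a custom seccomp profile instead of the default one
    docker run --security-opt seccomp=/path/to/custom-seccomp.json alpine

    # run with a custom AppArmor profile (must already be loaded on the host)
    docker run --security-opt apparmor=my-custom-profile alpine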

Instructions on how to do this inside Kubernetes can be found in the Security Context documentation and in the Kubernetes API documentation.
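For example, on Kubernetes v1.19 or later the default seccomp profile of the container runtime can be requested in the Security Context (a minimal sketch):

    securityContext:
      seccompProfile:
        type: RuntimeDefault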

RULE #7 - Limit resources (memory, CPU, file descriptors, processes, restarts)

The best way to avoid DoS attacks is limiting resources. You can limit memory, CPU, the maximum number of restarts (--restart=on-failure:<number_of_restarts>), the maximum number of file descriptors (--ulimit nofile=<number>) and the maximum number of processes (--ulimit nproc=<number>).
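A minimal sketch combining these flags (the values are illustrative and should be tuned to your application):

    docker run \
      --memory=512m --cpus=0.5 \
      --restart=on-failure:5 \
      --ulimit nofile=256 --ulimit nproc=64 \
      alpine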

Check the documentation for more details about ulimits.

You can also do this inside Kubernetes: Assign Memory Resources to Containers and Pods, Assign CPU Resources to Containers and Pods, and Assign Extended Resources to a Container.
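In a Pod spec this is done with resource requests and limits on each container, e.g. (a minimal sketch; the values are illustrative):

    containers:
    - name: ...
      image: ...
      resources:
        requests:
          memory: "64Mi"
          cpu: "250m"
        limits:
          memory: "128Mi"
          cpu: "500m"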

RULE #8 - Set filesystem and volumes to read-only

Run containers with a read-only filesystem using the --read-only flag. For example:

    docker run --read-only alpine sh -c 'echo "whatever" > /tmp/file'

If the application inside the container has to save something temporarily, combine the --read-only flag with --tmpfs like this:

    docker run --read-only --tmpfs /tmp alpine sh -c 'echo "whatever" > /tmp/file'

The equivalent in a docker-compose file will be:

    version: "3"
    services:
      alpine:
        image: alpine
        read_only: true
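If the application also needs a writable temporary directory, a sketch of the compose equivalent of combining read_only with a tmpfs mount (assuming the version 3 file format shown above) is:

    version: "3"
    services:
      alpine:
        image: alpine
        read_only: true
        tmpfs:
          - /tmp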

The equivalent in the Kubernetes Security Context will be:

    kind: ...
    apiVersion: ...
    metadata:
      name: ...
    spec:
      ...
      containers:
      - name: ...
        image: ...
        securityContext:
          ...
          readOnlyRootFilesystem: true
          ...

In addition, if a volume is mounted only for reading, mount it as read-only. It can be done by appending :ro to the -v option like this:

    docker run -v volume-name:/path/in/container:ro alpine

Or by using the --mount option:

    docker run --mount source=volume-name,destination=/path/in/container,readonly alpine

RULE #9 - Use static analysis tools

To detect containers with known vulnerabilities, scan images using static analysis tools.
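For example, with Trivy (one open-source scanner among several; the image name is a placeholder), a scan is a single command:

    trivy image myregistry/myapp:latest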

To detect misconfigurations in Kubernetes:

To detect misconfigurations in Docker:

RULE #10 - Set the logging level to at least INFO

By default, the Docker daemon is configured to have a base logging level of 'info', and if this is not the case: set the Docker daemon log level to 'info'. Rationale: setting an appropriate log level configures the Docker daemon to log events that you would want to review later. A base log level of 'info' and above will capture all logs except debug logs. Unless required, you should not run the Docker daemon at the 'debug' log level.
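A hedged sketch of setting this explicitly: either start the daemon with dockerd --log-level=info, or set it in /etc/docker/daemon.json:

    {
      "log-level": "info"
    }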

To configure the log level in docker-compose:

    docker-compose --log-level info up


Related Projects

OWASP Docker Top 10