Kata Containers with Cilium

Kata Containers is an open source project that provides a secure container runtime using lightweight virtual machines that feel and perform like containers, but offer stronger workload isolation through hardware virtualization as a second layer of defense. Kata Containers implements the OCI runtime spec, just like runc, the runtime used by Docker. Cilium can be used alongside Kata Containers, and using both together provides a higher degree of security: Kata Containers enhances security in the compute layer, while Cilium provides policy enforcement and observability in the networking layer.

This guide shows how to install Cilium along with Kata Containers. It assumes that you have already followed the official Kata Containers installation user guide to get the Kata Containers runtime up and running on your platform of choice, but that you haven't yet set up Kubernetes.

Note

This guide has been validated by following the Kata Containers guide for Google Compute Engine (GCE), using Ubuntu 18.04 LTS with the packaged versions of Kata Containers and CRI-containerd, and Kubernetes 1.18.3.

Setup Kubernetes with CRI

The Kata Containers runtime is an OCI-compatible runtime and cannot interact with the Kubernetes CRI API directly. For this reason, it relies on a CRI implementation to translate CRI calls into OCI calls. At the time of writing, there are two supported CRI implementations: CRI-O and CRI-containerd. Either one works, but you must pick one.

Refer to the section Requirements for detailed instructions on how to prepare your Kubernetes environment, and make sure to use Kubernetes >= 1.12. Then, follow the official guide to run Kata Containers with Kubernetes.
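
Once the cluster is up, you can confirm which CRI implementation each node registered by checking the CONTAINER-RUNTIME column that Kubernetes reports (a quick sanity check, assuming kubectl is already configured for your cluster):

    kubectl get nodes -o wide

The column shows the runtime endpoint in the form containerd://<version> or cri-o://<version>.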

Note

Kubernetes >= 1.12 is required to use the RuntimeClass feature for the Kata Containers runtime described below.
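
The official guide linked above walks you through creating the RuntimeClass. As a minimal sketch, assuming your CRI implementation is configured with a runtime handler named kata (the handler name depends on your containerd or CRI-O configuration), it looks like this:

    kubectl apply -f - <<EOF
    apiVersion: node.k8s.io/v1beta1  # use node.k8s.io/v1 on Kubernetes >= 1.20
    kind: RuntimeClass
    metadata:
      name: kata
    handler: kata  # must match the handler name configured in your CRI runtime
    EOF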

With your Kubernetes cluster ready, you can now proceed to deploy Cilium.

Deploy Cilium

Note

Make sure you have Helm 3 installed. Helm 2 is no longer supported.

Set up the Helm repository:

    helm repo add cilium https://helm.cilium.io/

Deploy the Cilium release via Helm:

Using CRI-O

    helm install cilium cilium/cilium --version 1.10.2 \
      --namespace kube-system \
      --set containerRuntime.integration=crio

Using CRI-containerd

    helm install cilium cilium/cilium --version 1.10.2 \
      --namespace kube-system \
      --set containerRuntime.integration=containerd

Warning

Kata Containers does not work with Host-Reachable Services or with kube-proxy replacement in strict mode. These features should be disabled with --set hostServices.enabled=false (the default) and --set kubeProxyReplacement=disabled (or partial).

Both features rely on socket-based load-balancing, which is not possible given that Kata containers are virtual machines running their own kernel. For kube-proxy replacement, this limitation is tracked in GitHub issue 15437.
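
For example, on CRI-containerd a complete install command that makes both settings explicit would look like this (hostServices.enabled=false is the default and is shown here only for clarity):

    helm install cilium cilium/cilium --version 1.10.2 \
      --namespace kube-system \
      --set containerRuntime.integration=containerd \
      --set hostServices.enabled=false \
      --set kubeProxyReplacement=disabled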

Validate the Installation

Cilium CLI

Manually

Install the latest version of the Cilium CLI. The Cilium CLI can be used to install Cilium, inspect the state of a Cilium installation, and enable/disable various features (e.g. clustermesh, Hubble).

Linux

    curl -L --remote-name-all https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz{,.sha256sum}
    sha256sum --check cilium-linux-amd64.tar.gz.sha256sum
    sudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin
    rm cilium-linux-amd64.tar.gz{,.sha256sum}

macOS

    curl -L --remote-name-all https://github.com/cilium/cilium-cli/releases/latest/download/cilium-darwin-amd64.tar.gz{,.sha256sum}
    shasum -a 256 -c cilium-darwin-amd64.tar.gz.sha256sum
    sudo tar xzvfC cilium-darwin-amd64.tar.gz /usr/local/bin
    rm cilium-darwin-amd64.tar.gz{,.sha256sum}

Other

See the full page of releases.

To validate that Cilium has been properly installed, you can run:

    $ cilium status --wait
        /¯¯\
     /¯¯\__/¯¯\    Cilium:         OK
     \__/¯¯\__/    Operator:       OK
     /¯¯\__/¯¯\    Hubble:         disabled
     \__/¯¯\__/    ClusterMesh:    disabled
        \__/

    DaemonSet         cilium             Desired: 2, Ready: 2/2, Available: 2/2
    Deployment        cilium-operator    Desired: 2, Ready: 2/2, Available: 2/2
    Containers:       cilium-operator    Running: 2
                      cilium             Running: 2
    Image versions    cilium             quay.io/cilium/cilium:v1.9.5: 2
                      cilium-operator    quay.io/cilium/operator-generic:v1.9.5: 2

Run the following command to validate that your cluster has proper network connectivity:

    $ cilium connectivity test
    ℹ️  Monitor aggregation detected, will skip some flow validation steps
    [k8s-cluster] Creating namespace for connectivity check...
    (...)
    ---------------------------------------------------------------------------------------------------------------------
    📋 Test Report
    ---------------------------------------------------------------------------------------------------------------------
    69/69 tests successful (0 warnings)

Congratulations! You have a fully functional Kubernetes cluster with Cilium. 🎉

You can monitor Cilium and all required components as they are being installed:

    $ kubectl -n kube-system get pods --watch
    NAME                              READY   STATUS              RESTARTS   AGE
    cilium-operator-cb4578bc5-q52qk   0/1     Pending             0          8s
    cilium-s8w5m                      0/1     PodInitializing     0          7s
    coredns-86c58d9df4-4g7dd          0/1     ContainerCreating   0          8m57s
    coredns-86c58d9df4-4l6b2          0/1     ContainerCreating   0          8m57s

It may take a couple of minutes for all components to come up:

    cilium-operator-cb4578bc5-q52qk   1/1     Running   0          4m13s
    cilium-s8w5m                      1/1     Running   0          4m12s
    coredns-86c58d9df4-4g7dd          1/1     Running   0          13m
    coredns-86c58d9df4-4l6b2          1/1     Running   0          13m

You can deploy the “connectivity-check” to test connectivity between pods. It is recommended to create a separate namespace for this.

    kubectl create ns cilium-test

Deploy the check with:

    kubectl apply -n cilium-test -f https://raw.githubusercontent.com/cilium/cilium/v1.10/examples/kubernetes/connectivity-check/connectivity-check.yaml

This deploys a series of deployments that use various connectivity paths to connect to each other. The connectivity paths include paths with and without service load-balancing, as well as various network policy combinations. The pod name indicates the connectivity variant, and the readiness and liveness gates indicate success or failure of the test:

    $ kubectl get pods -n cilium-test
    NAME                                                     READY   STATUS    RESTARTS   AGE
    echo-a-76c5d9bd76-q8d99                                  1/1     Running   0          66s
    echo-b-795c4b4f76-9wrrx                                  1/1     Running   0          66s
    echo-b-host-6b7fc94b7c-xtsff                             1/1     Running   0          66s
    host-to-b-multi-node-clusterip-85476cd779-bpg4b          1/1     Running   0          66s
    host-to-b-multi-node-headless-dc6c44cb5-8jdz8            1/1     Running   0          65s
    pod-to-a-79546bc469-rl2qq                                1/1     Running   0          66s
    pod-to-a-allowed-cnp-58b7f7fb8f-lkq7p                    1/1     Running   0          66s
    pod-to-a-denied-cnp-6967cb6f7f-7h9fn                     1/1     Running   0          66s
    pod-to-b-intra-node-nodeport-9b487cf89-6ptrt             1/1     Running   0          65s
    pod-to-b-multi-node-clusterip-7db5dfdcf7-jkjpw           1/1     Running   0          66s
    pod-to-b-multi-node-headless-7d44b85d69-mtscc            1/1     Running   0          66s
    pod-to-b-multi-node-nodeport-7ffc76db7c-rrw82            1/1     Running   0          65s
    pod-to-external-1111-d56f47579-d79dz                     1/1     Running   0          66s
    pod-to-external-fqdn-allow-google-cnp-78986f4bcf-btjn7   1/1     Running   0          66s

Note

If you deploy the connectivity check to a single-node cluster, pods that check multi-node functionality will remain in the Pending state. This is expected, since these pods need at least 2 nodes to be scheduled successfully.

Once done with the test, remove the cilium-test namespace:

    kubectl delete ns cilium-test

Run Kata Containers with Cilium CNI

Now that your Kubernetes cluster is configured with the Kata Containers runtime and Cilium as the CNI, you can run a sample workload by following these instructions.
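
As a minimal sketch, assuming the RuntimeClass created earlier is named kata, a pod that runs under the Kata Containers runtime only needs to set runtimeClassName (the pod and image names below are illustrative):

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx-kata
    spec:
      runtimeClassName: kata  # selects the Kata Containers runtime handler
      containers:
      - name: nginx
        image: nginx
    EOF

You can verify that the pod is actually running inside a Kata virtual machine by comparing kernels: kubectl exec nginx-kata -- uname -r should report a different kernel version than uname -r run directly on the node.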