Quick Installation

This guide will walk you through the quick default installation. It will automatically detect and use the best configuration possible for the Kubernetes distribution you are using. All state is stored using Kubernetes custom resource definitions (CRDs).

This is the best installation method for most use cases. For large environments (> 500 nodes) or if you want to run specific datapath modes, refer to the Advanced Installation guide.

Should you encounter any issues during the installation, please refer to the Troubleshooting section and/or seek help on the Slack channel.

Create the Cluster

If you don’t have a Kubernetes cluster yet, you can use the instructions below to create one locally or using a managed Kubernetes service:

GKE | AKS (BYOCNI) | AKS (Azure IPAM) | EKS | kind | minikube | Rancher Desktop

The following commands create a Kubernetes cluster using Google Kubernetes Engine. See Installing Google Cloud SDK for instructions on how to install gcloud and prepare your account.

  export NAME="$(whoami)-$RANDOM"
  # Create the node pool with the following taint to guarantee that
  # Pods are only scheduled/executed on the node once Cilium is ready.
  # Alternatively, see the note below.
  gcloud container clusters create "${NAME}" \
    --node-taints node.cilium.io/agent-not-ready=true:NoExecute \
    --zone us-west2-a
  gcloud container clusters get-credentials "${NAME}" --zone us-west2-a

Note

Please make sure to read and understand the documentation page on taint effects and unmanaged pods.
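
If you want to double-check that the taint is in place before deploying workloads, you can list node taints with kubectl (standard kubectl output options; the exact rendering of the taint objects may vary):

  kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints'

Each node should show the node.cilium.io/agent-not-ready taint until Cilium is up and removes it.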

Note

BYOCNI is the preferred way to run Cilium on AKS. However, integration with the Azure stack via Azure IPAM is not available with it. If you require Azure IPAM, refer to the AKS (Azure IPAM) installation.

The following commands create a Kubernetes cluster using Azure Kubernetes Service with no CNI plugin pre-installed (BYOCNI). See Azure Cloud CLI for instructions on how to install az and prepare your account, and the Bring your own CNI documentation for more details about BYOCNI prerequisites and implications.

Note

BYOCNI requires the aks-preview CLI extension version >= 0.5.55, which itself requires az CLI version >= 2.32.0.
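
If needed, the extension can be installed or updated with the standard az extension commands, and az version shows the CLI version:

  az extension add --name aks-preview
  az extension update --name aks-preview
  az version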

  export NAME="$(whoami)-$RANDOM"
  export AZURE_RESOURCE_GROUP="${NAME}-group"
  az group create --name "${AZURE_RESOURCE_GROUP}" -l westus2
  # Create AKS cluster
  az aks create \
    --resource-group "${AZURE_RESOURCE_GROUP}" \
    --name "${NAME}" \
    --network-plugin none
  # Get the credentials to access the cluster with kubectl
  az aks get-credentials --resource-group "${AZURE_RESOURCE_GROUP}" --name "${NAME}"

Note

Azure IPAM offers integration with the Azure stack but is not the preferred way to run Cilium on AKS. If you do not require Azure IPAM, we recommend switching to the AKS (BYOCNI) installation.

The following commands create a Kubernetes cluster using Azure Kubernetes Service. See Azure Cloud CLI for instructions on how to install az and prepare your account.

  export NAME="$(whoami)-$RANDOM"
  export AZURE_RESOURCE_GROUP="${NAME}-group"
  az group create --name "${AZURE_RESOURCE_GROUP}" -l westus2
  # Create AKS cluster
  az aks create \
    --resource-group "${AZURE_RESOURCE_GROUP}" \
    --name "${NAME}" \
    --network-plugin azure \
    --node-count 2
  # Get the credentials to access the cluster with kubectl
  az aks get-credentials --resource-group "${AZURE_RESOURCE_GROUP}" --name "${NAME}"

Attention

Do NOT specify the --network-policy flag when creating the cluster, as this will cause the Azure CNI plugin to install unwanted iptables rules.

The following commands create a Kubernetes cluster with eksctl using Amazon Elastic Kubernetes Service. See eksctl Installation for instructions on how to install eksctl and prepare your account.

  export NAME="$(whoami)-$RANDOM"
  cat <<EOF >eks-config.yaml
  apiVersion: eksctl.io/v1alpha5
  kind: ClusterConfig

  metadata:
    name: ${NAME}
    region: eu-west-1

  managedNodeGroups:
  - name: ng-1
    desiredCapacity: 2
    privateNetworking: true
    # taint nodes so that application pods are
    # not scheduled/executed until Cilium is deployed.
    # Alternatively, see the note below.
    taints:
     - key: "node.cilium.io/agent-not-ready"
       value: "true"
       effect: "NoExecute"
  EOF
  eksctl create cluster -f ./eks-config.yaml

Note

Please make sure to read and understand the documentation page on taint effects and unmanaged pods.

Install kind >= v0.7.0 as per the kind documentation: Installation and Usage

  curl -LO https://raw.githubusercontent.com/cilium/cilium/v1.12/Documentation/gettingstarted/kind-config.yaml
  kind create cluster --config=kind-config.yaml
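
Once the cluster is up, you can sanity-check it. kind names the kubectl context kind-<cluster name>; assuming the default cluster name kind:

  kubectl cluster-info --context kind-kind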

Install minikube >= v1.12 as per the minikube documentation: Install Minikube. The following command brings up a single-node minikube cluster prepared for installing Cilium.

  minikube start --network-plugin=cni --cni=false

Note

From minikube v1.12.1+, the Cilium networking plugin can be enabled directly with the --cni=cilium parameter of the minikube start command. However, this may not install the latest version of Cilium.
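
For example, the one-line alternative described above would be (pinning the Cilium version is not possible this way):

  minikube start --cni=cilium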

Install Rancher Desktop >= v1.1.0 as per the Rancher Desktop documentation: Install Rancher Desktop.

Next, configure Rancher Desktop to disable the built-in CNI so you can install Cilium in its place. This is done with a YAML configuration file: start Rancher Desktop with containerd as the container runtime and create an override.yaml:

  env:
    # needed for cilium
    K3S_EXEC: '--flannel-backend=none --disable-network-policy'
  provision:
    # needs root to mount
    - mode: system
      script: |
        #!/bin/sh
        set -e
        # needed for cilium
        mount bpffs -t bpf /sys/fs/bpf
        mount --make-shared /sys/fs/bpf
        mkdir -p /run/cilium/cgroupv2
        mount -t cgroup2 none /run/cilium/cgroupv2
        mount --make-shared /run/cilium/cgroupv2/

After the file is created, move it into your Rancher Desktop’s lima/_config directory:

Linux:

  cp override.yaml ~/.local/share/rancher-desktop/lima/_config/override.yaml

macOS:

  cp override.yaml ~/Library/Application\ Support/rancher-desktop/lima/_config/override.yaml

Finally, open the Rancher Desktop UI, go to the Kubernetes Settings panel, and click “Reset Kubernetes”.

After a few minutes, Rancher Desktop will start back up, ready for Cilium to be installed.

Install the Cilium CLI

Install the latest version of the Cilium CLI. The Cilium CLI can be used to install Cilium, inspect the state of a Cilium installation, and enable/disable various features (e.g. clustermesh, Hubble).

Linux:

  CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/master/stable.txt)
  CLI_ARCH=amd64
  if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
  curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
  sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
  sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
  rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}

macOS:

  CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/master/stable.txt)
  CLI_ARCH=amd64
  if [ "$(uname -m)" = "arm64" ]; then CLI_ARCH=arm64; fi
  curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-darwin-${CLI_ARCH}.tar.gz{,.sha256sum}
  shasum -a 256 -c cilium-darwin-${CLI_ARCH}.tar.gz.sha256sum
  sudo tar xzvfC cilium-darwin-${CLI_ARCH}.tar.gz /usr/local/bin
  rm cilium-darwin-${CLI_ARCH}.tar.gz{,.sha256sum}

See the full page of releases.
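
To confirm the CLI is installed and on your PATH, you can check its version (the output format may vary between releases):

  cilium version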

Install Cilium

You can install Cilium on any Kubernetes cluster. Pick one of the options below:

Generic | GKE | AKS (BYOCNI) | AKS (Azure IPAM) | EKS | OpenShift | RKE | k3s

These are the generic instructions on how to install Cilium into any Kubernetes cluster. The installer will attempt to automatically pick the best configuration options for you. Please see the other tabs for distribution/platform-specific instructions, which also list the ideal default configuration for particular platforms.

Requirements:

Tip

See System Requirements for more details on the system requirements.

Install Cilium

Install Cilium into the Kubernetes cluster pointed to by your current kubectl context:

  cilium install

To install Cilium on Google Kubernetes Engine (GKE), perform the following steps:

Default Configuration:

  • Datapath: Direct Routing
  • IPAM: Kubernetes PodCIDR
  • Datastore: Kubernetes CRD

Requirements:

  • The cluster should be created with the taint node.cilium.io/agent-not-ready=true:NoExecute using the --node-taints option. However, there are other options. Please make sure to read and understand the documentation page on taint effects and unmanaged pods.

Install Cilium:

Install Cilium into the GKE cluster:

  cilium install

To install Cilium on Azure Kubernetes Service (AKS) in Bring your own CNI mode, perform the following steps:

Default Configuration:

  • Datapath: Encapsulation
  • IPAM: Cluster Pool
  • Datastore: Kubernetes CRD

Note

BYOCNI is the preferred way to run Cilium on AKS. However, integration with the Azure stack via Azure IPAM is not available with it. If you require Azure IPAM, refer to the AKS (Azure IPAM) installation.

Requirements:

  • The AKS cluster must be created with --network-plugin none (BYOCNI). See the Bring your own CNI documentation for more details about BYOCNI prerequisites / implications.
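
To double-check an existing cluster, you can query its network plugin setting with az; for a BYOCNI cluster this should print none (assuming the NAME and AZURE_RESOURCE_GROUP variables from the cluster creation step above):

  az aks show --resource-group "${AZURE_RESOURCE_GROUP}" --name "${NAME}" \
    --query networkProfile.networkPlugin --output tsv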

Install Cilium:

Install Cilium into the AKS cluster:

  cilium install --azure-resource-group "${AZURE_RESOURCE_GROUP}"

To install Cilium on Azure Kubernetes Service (AKS) with Azure integration via Azure IPAM, perform the following steps:

Default Configuration:

  • Datapath: Direct Routing
  • IPAM: Azure IPAM
  • Datastore: Kubernetes CRD

Note

Azure IPAM offers integration with the Azure stack but is not the preferred way to run Cilium on AKS. If you do not require Azure IPAM, we recommend switching to the AKS (BYOCNI) installation.

Tip

If you want to chain Cilium on top of the Azure CNI, refer to the guide Azure CNI.

Requirements:

  • The AKS cluster must be created with --network-plugin azure for compatibility with Cilium. The Azure network plugin will be replaced with Cilium by the installer.

Limitations:

  • All VMs and VM scale sets used in a cluster must belong to the same resource group.

  • Adding new nodes to node pools might result in application pods being scheduled on the new nodes before Cilium is ready to properly manage them. The only way to fix this is either by making sure application pods are not scheduled on new nodes before Cilium is ready, or by restarting any unmanaged pods on the nodes once Cilium is ready.

    Ideally, we would recommend tainting node pools with node.cilium.io/agent-not-ready=true:NoExecute to ensure application pods are only scheduled/executed once Cilium is ready to manage them (see Considerations on node pool taints and unmanaged pods for more details). However, this is not an option on AKS clusters:

    • It is not possible to assign custom node taints such as node.cilium.io/agent-not-ready=true:NoExecute to system node pools, cf. Azure/AKS#2578: only CriticalAddonsOnly=true:NoSchedule is available for our use case. To make matters worse, it is not possible to assign taints to the initial node pool created for new AKS clusters, cf. Azure/AKS#1402.

    • Custom node taints on user node pools cannot be properly managed at will anymore, cf. Azure/AKS#2934.

    • These issues prevent the previously recommended approach of replacing the initial system node pool with one tainted CriticalAddonsOnly=true:NoSchedule and using additional user node pools tainted with node.cilium.io/agent-not-ready=true:NoExecute.

    We do not have a standard and foolproof alternative to recommend; the only solution is to craft a custom mechanism, suited to your environment, that handles this scenario when adding new nodes to AKS clusters (see the sketch below).
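
As an illustration only, below is a minimal sketch of one such custom mechanism: once Cilium is ready on a newly added node, restart the non-host-network pods on it so their controllers recreate them under Cilium management. The NODE placeholder and the hostNetwork-based filtering are assumptions; adapt both to your environment:

  # Hypothetical placeholder for the newly added node's name.
  NODE="<new-node-name>"
  # List every pod on that node together with its hostNetwork setting,
  # skip the header row and host-network pods (which Cilium does not manage),
  # then delete the remainder so their controllers recreate them.
  kubectl get pods --all-namespaces --field-selector spec.nodeName="${NODE}" \
    -o custom-columns='NS:.metadata.namespace,POD:.metadata.name,HOSTNET:.spec.hostNetwork' \
  | awk 'NR > 1 && $3 != "true" { print $1, $2 }' \
  | while read -r ns pod; do kubectl delete pod --namespace "$ns" "$pod"; done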

Install Cilium:

Install Cilium into the AKS cluster:

  cilium install --azure-resource-group "${AZURE_RESOURCE_GROUP}"

To install Cilium on Amazon Elastic Kubernetes Service (EKS), perform the following steps:

Default Configuration:

  • Datapath: Direct Routing (ENI)
  • IPAM: AWS ENI
  • Datastore: Kubernetes CRD

For more information on AWS ENI mode, see AWS ENI.

Tip

If you want to chain Cilium on top of the AWS CNI, refer to the guide AWS VPC CNI plugin.

Requirements:

  • The EKS managed node groups must be properly tainted to ensure application pods are properly managed by Cilium:

    • managedNodeGroups should be tainted with node.cilium.io/agent-not-ready=true:NoExecute to ensure application pods will only be scheduled once Cilium is ready to manage them. However, there are other options. Please make sure to read and understand the documentation page on taint effects and unmanaged pods.

      Below is an example of how to use a ClusterConfig file to create the cluster:

      apiVersion: eksctl.io/v1alpha5
      kind: ClusterConfig
      ...
      managedNodeGroups:
      - name: ng-1
        ...
        # taint nodes so that application pods are
        # not scheduled/executed until Cilium is deployed.
        # Alternatively, see the note above regarding taint effects.
        taints:
         - key: "node.cilium.io/agent-not-ready"
           value: "true"
           effect: "NoExecute"

Limitations:

  • The AWS ENI integration of Cilium is currently only enabled for IPv4. If you want to use IPv6, use a datapath/IPAM mode other than ENI.

Install Cilium:

Install Cilium into the EKS cluster.

  cilium install
  cilium status --wait

To install Cilium on OpenShift, perform the following steps:

Default Configuration:

  • Datapath: Encapsulation
  • IPAM: Cluster Pool
  • Datastore: Kubernetes CRD

Requirements:

  • OpenShift 4.x

Install Cilium:

Cilium is a Certified OpenShift CNI Plugin and is best installed when an OpenShift cluster is created using the OpenShift installer. Please refer to Installation on OpenShift OKD for more information.

To install Cilium on Rancher Kubernetes Engine (RKE), perform the following steps:

Note

If you are using RKE2, Cilium is directly integrated; see Using Cilium in the RKE2 documentation. You can use either that integration or the method described here.

Default Configuration:

  • Datapath: Encapsulation
  • IPAM: Cluster Pool
  • Datastore: Kubernetes CRD

Requirements:

  • Follow the RKE Installation Guide with the following change:

    From:

    network:
      options:
        flannel_backend_type: "vxlan"
      plugin: "canal"

    To:

    network:
      plugin: none

Install Cilium:

Install Cilium into your newly created RKE cluster:

  cilium install

To install Cilium on k3s, perform the following steps:

Default Configuration:

  • Datapath: Encapsulation
  • IPAM: Cluster Pool
  • Datastore: Kubernetes CRD

Requirements:

  • Install your k3s cluster as you normally would, but make sure to disable the default CNI plugin and the built-in network policy enforcer so you can install Cilium on top:

    curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC='--flannel-backend=none --disable-network-policy' sh -

  • For the Cilium CLI to access the cluster in subsequent steps, you will need to use the kubeconfig file stored at /etc/rancher/k3s/k3s.yaml by setting the KUBECONFIG environment variable:

    export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
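
You can then confirm that kubectl (and thus the Cilium CLI) can reach the cluster:

  kubectl get nodes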

Install Cilium:

Install Cilium into your newly created Kubernetes cluster:

  cilium install

If the installation fails for some reason, run cilium status to retrieve the overall status of the Cilium deployment and inspect the logs of any pods that fail to deploy.

Tip

You may see cilium install print something like this:

  ♻️  Restarted unmanaged pod kube-system/event-exporter-gke-564fb97f9-rv8hg
  ♻️  Restarted unmanaged pod kube-system/kube-dns-6465f78586-hlcrz
  ♻️  Restarted unmanaged pod kube-system/kube-dns-autoscaler-7f89fb6b79-fsmsg
  ♻️  Restarted unmanaged pod kube-system/l7-default-backend-7fd66b8b88-qqhh5
  ♻️  Restarted unmanaged pod kube-system/metrics-server-v0.3.6-7b5cdbcbb8-kjl65
  ♻️  Restarted unmanaged pod kube-system/stackdriver-metadata-agent-cluster-level-6cc964cddf-8n2rt

This indicates that your cluster was already running some pods before Cilium was deployed and the installer has automatically restarted them to ensure all pods get networking provided by Cilium.

Validate the Installation

To validate that Cilium has been properly installed, you can run

  $ cilium status --wait
      /¯¯\
   /¯¯\__/¯¯\    Cilium:         OK
   \__/¯¯\__/    Operator:       OK
   /¯¯\__/¯¯\    Hubble:         disabled
   \__/¯¯\__/    ClusterMesh:    disabled
      \__/

  DaemonSet         cilium             Desired: 2, Ready: 2/2, Available: 2/2
  Deployment        cilium-operator    Desired: 2, Ready: 2/2, Available: 2/2
  Containers:       cilium-operator    Running: 2
                    cilium             Running: 2
  Image versions    cilium             quay.io/cilium/cilium:v1.9.5: 2
                    cilium-operator    quay.io/cilium/operator-generic:v1.9.5: 2

Run the following command to validate that your cluster has proper network connectivity:

  $ cilium connectivity test
  ℹ️  Monitor aggregation detected, will skip some flow validation steps
  [k8s-cluster] Creating namespace for connectivity check...
  (...)
  ---------------------------------------------------------------------------------------------------------------------
  📋 Test Report
  ---------------------------------------------------------------------------------------------------------------------
  69/69 tests successful (0 warnings)
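
The connectivity test deploys its workloads into a dedicated namespace, cilium-test by default. Once the test has passed, it can be cleaned up by deleting that namespace (adjust the name if you overrode it):

  kubectl delete namespace cilium-test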

Congratulations! You have a fully functional Kubernetes cluster with Cilium. 🎉

Next Steps